WorldWideScience

Sample records for machine size scaling

  1. Very Large-Scale Neighborhoods with Performance Guarantees for Minimizing Makespan on Parallel Machines

    NARCIS (Netherlands)

    Brueggemann, T.; Hurink, Johann L.; Vredeveld, T.; Woeginger, Gerhard

    2006-01-01

    We study the problem of minimizing the makespan on m parallel machines. We introduce a very large-scale neighborhood of exponential size (in the number of machines) that is based on a matching in a complete graph. The idea is to partition the jobs assigned to the same machine into two sets. This

  2. Large-scale Ising-machines composed of magnetic neurons

    Science.gov (United States)

    Mizushima, Koichi; Goto, Hayato; Sato, Rie

    2017-10-01

We propose Ising machines composed of magnetic neurons, that is, magnetic bits in a recording track. In large-scale machines, the sizes of both neurons and synapses need to be reduced, and neat and smart connections among neurons are also required to achieve all-to-all connectivity. These requirements can be fulfilled by adopting magnetic recording technologies such as racetrack memories and skyrmion tracks: the area of a magnetic bit is almost two orders of magnitude smaller than that of static random access memory, which has normally been used as a semiconductor neuron, and the smart connections among neurons are realized by using the read and write methods of these technologies.

  3. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    Science.gov (United States)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising for application not only to gas storage in MOFs but also to many other materials science projects.

  4. Size-scaling of tensile failure stress in boron carbide

    Energy Technology Data Exchange (ETDEWEB)

Wereszczak, Andrew A [ORNL]; Kirkland, Timothy Philip [ORNL]; Strong, Kevin T [ORNL]; Jadaan, Osama M. [University of Wisconsin, Platteville]; Thompson, G. A. [U.S. Army Dental and Trauma Research Detachment, Great Lakes]

    2010-01-01

Weibull strength-size-scaling in a rotary-ground, hot-pressed boron carbide is described for strength test coupons whose effective areas ranged from the very small (~0.001 mm²) to the very large (~40,000 mm²). Equibiaxial flexure and Hertzian testing were used for the strength testing. Characteristic strengths for several different specimen geometries are analyzed as a function of effective area. Characteristic strength was found to increase substantially with decreased effective area and exhibited a bilinear relationship. Machining damage limited strength as measured with equibiaxial flexure testing for effective areas greater than ~1 mm², and microstructural-scale flaws limited strength for effective areas less than 0.1 mm² in the Hertzian testing. The selection of a ceramic strength to account for ballistically induced tile deflection and for expanding cavity modeling is considered in the context of the measured strength-size-scaling.
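The area-scaling behavior described above follows the standard Weibull relation sigma_2 = sigma_1 * (A_1/A_2)^(1/m). The snippet below is a minimal numerical illustration; the Weibull modulus m = 10 and the strength values are hypothetical, not the paper's data:

```python
# Weibull effective-area strength scaling: sigma_2 = sigma_1 * (A_1 / A_2)**(1/m)
def scaled_strength(sigma_1, a_1, a_2, m):
    """Characteristic strength at effective area a_2, given strength sigma_1 at area a_1."""
    return sigma_1 * (a_1 / a_2) ** (1.0 / m)

# Hypothetical example: m = 10, 400 MPa at 1 mm^2, scaled up to 40,000 mm^2
print(scaled_strength(400.0, 1.0, 40000.0, 10.0))  # strength drops as area grows
```

A bilinear trend like the one reported corresponds to two distinct flaw populations, i.e., effectively different m values in the small-area and large-area regimes.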

  5. Classification of large-sized hyperspectral imagery using fast machine learning algorithms

    Science.gov (United States)

    Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira

    2017-07-01

We present a framework of fast machine learning algorithms for the classification of large-sized hyperspectral images, from a theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. For the quantitative analysis, we focus on comparing these methods when working with high input dimensions and a limited/sufficient training set. Other important issues, such as computational cost and robustness against noise, are also discussed.

  6. Ultraprecision machining. Cho seimitsu kako

    Energy Technology Data Exchange (ETDEWEB)

    Suga, T [The Univ. of Tokyo, Tokyo (Japan). Research Center for Advanced Science and Technology

    1992-10-05

The achievable precision of ultraprecision machining is said to have improved from 0.1 µm to 0.01 µm in recent years. Ultraprecision machining is a production technology that, together with ultraprecision measurement and ultraprecision control, forms what is called nanotechnology. Accuracy means that the average machined size is close to the required value, i.e., the deflection (bias) errors are small; precision means that the scatter of the machined sizes is small. Machining error comprises both of these, and ultraprecision means that the combined error is very small. In present ultraprecision machining, the precision relative to the size of the machined object is said to be on the order of 10⁻⁶; the flatness of silicon wafers, for example, is usually less than 0.5 µm. Atomic-scale machining is awaited as the limit of ultraprecision machining, and the removal and addition of material in atomic units using scanning probe microscopes is expected to actually reach this limit. 2 refs.
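The accuracy/precision distinction drawn above can be made concrete with a small numeric example (the target and measured sizes are illustrative, not taken from the text):

```python
# Accuracy = closeness of the mean machined size to the required value (bias error);
# precision = scatter of the machined sizes about their mean (random error).
import statistics

target = 10.000  # required size, e.g. in mm (illustrative)
sizes = [10.002, 10.001, 10.003, 10.002, 10.002]  # measured machined sizes

accuracy_error = abs(statistics.mean(sizes) - target)  # deflection (bias) error
precision_error = statistics.stdev(sizes)              # scatter error

print(round(accuracy_error, 4), round(precision_error, 4))
```

A process can be precise but inaccurate (tight scatter around the wrong mean) or accurate but imprecise; ultraprecision machining demands that both errors be very small.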

  7. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  8. Comparison of Machine Learning Techniques in Inferring Phytoplankton Size Classes

    Directory of Open Access Journals (Sweden)

    Shuibo Hu

    2018-03-01

The size of phytoplankton not only influences its physiology, metabolic rates, and the marine food web, but also serves as an indicator of phytoplankton functional roles in ecological and biogeochemical processes. Therefore, algorithms have been developed to infer the synoptic distribution of phytoplankton cell size, denoted as phytoplankton size classes (PSCs), in surface ocean waters by means of remotely sensed variables. This study, using the NASA bio-Optical Marine Algorithm Data set (NOMAD) high-performance liquid chromatography (HPLC) database and satellite match-ups, aimed to compare the effectiveness of modeling techniques, including partial least squares (PLS), artificial neural networks (ANN), support vector machine (SVM), and random forests (RF), and feature selection techniques, including genetic algorithm (GA), successive projection algorithm (SPA), and recursive feature elimination based on support vector machine (SVM-RFE), for inferring PSCs from remote sensing data. Results showed that: (1) SVM-RFE worked better in selecting sensitive features; (2) RF performed better than PLS, ANN, and SVM in calibrating PSC retrieval models; (3) machine learning techniques produced better performance than the chlorophyll-a-based three-component method; (4) sea surface temperature, wind stress, and spectral curvature derived from the remote sensing reflectance at 490, 510, and 555 nm were among the features most sensitive to PSCs; and (5) the combination of SVM-RFE feature selection and random forests regression is recommended for inferring PSCs. This study demonstrated the effectiveness of machine learning techniques in selecting sensitive features and calibrating models for PSC estimation with remote sensing.
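The recommended combination, SVM-RFE feature selection followed by random-forests regression, can be sketched with scikit-learn. The data below are synthetic stand-ins, with two artificially informative features in place of real remote-sensing predictors:

```python
# SVM-RFE feature selection + random-forest regression, on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 200 "match-ups", 10 candidate features
y = 2.0 * X[:, 0] + X[:, 3] + rng.normal(scale=0.1, size=200)  # PSC-fraction proxy

# SVM-RFE: recursively eliminate the features with the smallest linear-SVR weights
selector = RFE(SVR(kernel="linear"), n_features_to_select=4).fit(X, y)
X_sel = X[:, selector.support_]

# Random-forest regression on the retained features
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_sel, y)
print(np.flatnonzero(selector.support_))  # the informative features should survive
```

RFE requires an estimator exposing per-feature weights (here the linear SVR's `coef_`), which is why the selection stage uses a linear kernel even though the final regressor is nonlinear.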

  9. Transportation and Production Lot-size for Sugarcane under Uncertainty of Machine Capacity

    Directory of Open Access Journals (Sweden)

    Sudtachat Kanchala

    2018-01-01

The integrated transportation and production lot-size problem has an important effect on the total cost of the operating system of sugar factories. In this research, we formulate a mathematical model that combines these two problems as a two-stage stochastic programming model. In the first stage, we determine the lot size of the transportation problem and allocate a fixed number of vehicles to transport sugarcane to the mill factory, considering uncertainty in the machine (mill) capacities. After the machine (mill) capacities are realized, in the second stage we determine the production lot size and decide how many units of sugarcane to hold in front of the mills, based on discrete random variables for the machine (mill) capacities. We investigate the model using a small-sized problem. The results show that the optimal solutions tend to choose the closest fields and the lowest holding cost per unit (at fields) for transporting sugarcane to the mill factory. We compare our model with the worst-case model (full capacity); the results show that our model is more efficient than the worst-case model.

  10. Graphene-based bimorphs for micron-sized, autonomous origami machines.

    Science.gov (United States)

    Miskin, Marc Z; Dorsey, Kyle J; Bircan, Baris; Han, Yimo; Muller, David A; McEuen, Paul L; Cohen, Itai

    2018-01-16

Origami-inspired fabrication presents an attractive platform for miniaturizing machines: thinner layers of folding material lead to smaller devices, provided that key functional aspects, such as conductivity, stiffness, and flexibility, are preserved. Here, we show origami fabrication at its ultimate limit by using 2D atomic membranes as a folding material. As a prototype, we bond graphene sheets to nanometer-thick layers of glass to make ultrathin bimorph actuators that bend to micrometer radii of curvature in response to small strain differentials. These strains are two orders of magnitude lower than the fracture threshold for the device, thus maintaining conductivity across the structure. By patterning 2-µm-thick rigid panels on top of bimorphs, we localize bending to the unpatterned regions to produce folds. Although the graphene bimorphs are only nanometers thick, they can lift these panels, the weight equivalent of a 500-nm-thick silicon chip. Using panels and bimorphs, we can scale down existing origami patterns to produce a wide range of machines. These machines change shape in fractions of a second when crossing a tunable pH threshold, showing that they sense their environments, respond, and perform useful functions on time and length scales comparable with microscale biological organisms. With the incorporation of electronic, photonic, and chemical payloads, these basic elements will become a powerful platform for robotics at the micrometer scale.

  11. TensorFlow: A system for large-scale machine learning

    OpenAIRE

    Abadi, Martín; Barham, Paul; Chen, Jianmin; Chen, Zhifeng; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Irving, Geoffrey; Isard, Michael; Kudlur, Manjunath; Levenberg, Josh; Monga, Rajat; Moore, Sherry; Murray, Derek G.

    2016-01-01

    TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexib...

  12. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  13. Size reduction machine

    International Nuclear Information System (INIS)

    Fricke, V.

    1999-01-01

The Size Reduction Machine (SRM) is a mobile platform capable of shearing various shapes and types of metal components at a variety of elevations. This shearing activity can be performed without direct physical movement and placement of the shear head by the operator. The base unit is manually moved and roughly aligned to each cut location. The base contains the electronics, hydraulic pumps, servos, and actuators needed to move the shear-positioning arm. The movable arm gives the shear head six axes of movement and allows cuts to within 4 inches of a wall surface. The unit has a slick electrostatic capture coating to assist in external decontamination. Internal contamination of the unit is controlled by a high-efficiency particulate air (HEPA) filter on the cooling inlet fan. The unit is compact enough to access areas through a 36-inch standard door opening. This paper is an Innovative Technology Summary Report designed to provide potential users with the information they need to quickly determine whether a technology applies to a particular environmental management problem. Such reports are also designed for readers who may recommend that a technology be considered by prospective users.

  14. Separating the Classes of Recursively Enumerable Languages Based on Machine Size

    Czech Academy of Sciences Publication Activity Database

    van Leeuwen, J.; Wiedermann, Jiří

    2015-01-01

    Roč. 26, č. 6 (2015), s. 677-695 ISSN 0129-0541 R&D Projects: GA ČR GAP202/10/1333 Grant - others:GA ČR(CZ) GA15-04960S Institutional support: RVO:67985807 Keywords : recursively enumerable languages * RE hierarchy * finite languages * machine size * descriptional complexity * Turing machines with advice Subject RIV: IN - Informatics, Computer Science Impact factor: 0.467, year: 2015

  15. New Balancing Equipment for Mass Production of Small and Medium-Sized Electrical Machines

    DEFF Research Database (Denmark)

    Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika

    2010-01-01

The level of vibration and noise is an important feature and a significant indicator of the quality of electrical machines. The mass production of small and medium-sized electrical machines demands speed (short typical measurement time), reliability...

  16. Real-time spot size camera for pulsed high-energy radiographic machines

    International Nuclear Information System (INIS)

    Watson, S.A.

    1993-01-01

The focal spot size of an x-ray source is a critical parameter that degrades resolution in a flash radiograph. For best results, a small, round focal spot is required; a fast and accurate measurement of the spot size is therefore highly desirable to facilitate machine tuning. This paper describes two systems developed for Los Alamos National Laboratory's Pulsed High-Energy Radiographic Machine Emitting X-rays (PHERMEX) facility. The first uses a CCD camera combined with high-brightness fluors, while the second utilizes phosphor storage screens. Other techniques typically record only the line spread function on radiographic film, while the systems described in this paper measure the more general two-dimensional point-spread function and the associated modulation transfer function in real time for shot-to-shot comparison

  17. A study of energy-size relationship and wear rate in a lab-scale high pressure grinding rolls unit

    Science.gov (United States)

    Rashidi Dashtbayaz, Samira

This study focuses on two independent topics: the energy-size relationship and wear-rate measurements on a lab-scale high pressure grinding rolls (HPGR) unit. The first part of the study investigates the influence of the operating parameters and the feed characteristics on particle-bed breakage, using four different ore samples in a 200 mm x 100 mm lab-scale HPGR. Additionally, multistage grinding, scale-up from a lab-scale HPGR, and prediction of the particle size distributions have been studied in detail. The results obtained from the energy-size relationship studies help with better understanding of the factors contributing to more energy-efficient grinding. It will be shown that the energy efficiency of the two configurations, locked-cycle and open multipass, is completely dependent on the ore properties. A test procedure to produce the scale-up data is presented. The comparison of the scale-up factors between the data obtained on the University of Utah lab-scale HPGR and the industrial machine at the Newmont Boddington plant confirmed the applicability of lab-scale machines for trade-off studies. The population balance model for the simulation of product size distributions has been shown to work well with the breakage function estimated through tests performed on the HPGR at high rotational speed. The selection function has been estimated by back-calculation of the population balance model with the help of the experimental data. This is considered a major step towards advancing current research on the simulation of particle size distributions using the HPGR machine for determining the breakage function. Developing a technique/setup to measure the wear rate of the HPGR rolls' surface is the objective of the second topic of this dissertation. A mockup was initially designed to assess the application of linear displacement sensors for measuring the rolls' weight loss. Upon the analysis of that technique and considering the corresponding sources of

  18. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
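The prototype idea, approximating the full kernel matrix from a small set of prototype vectors, can be illustrated with a generic Nyström-style low-rank approximation. This is a sketch in the same spirit, not the authors' PVM implementation; the data and kernel parameters are invented:

```python
import numpy as np

def rbf(A, B, gamma=0.1):
    """RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
protos = X[rng.choice(500, size=50, replace=False)]  # 50 prototype vectors

# Low-rank approximation K ~ C W^+ C^T built from prototypes only:
# storage and compute scale with the number of prototypes, not the data size.
C = rbf(X, protos)                      # 500 x 50 cross-kernel
W = rbf(protos, protos)                 # 50 x 50 prototype kernel
K_approx = C @ np.linalg.pinv(W) @ C.T

K_exact = rbf(X, X)
rel_err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(rel_err)  # small when the prototypes cover the data well
```

The quality of the approximation depends on how well the prototypes span the data, which mirrors the paper's two selection criteria (low-rank kernel approximation and minimum information loss).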

  19. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large-scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results at par with or outperforming state-of-the-art approaches.

  20. Accelerating Relevance Vector Machine for Large-Scale Data on Spark

    Directory of Open Access Journals (Sweden)

    Liu Fang

    2017-01-01

Relevance vector machine (RVM) is a machine learning algorithm based on a sparse Bayesian framework, which performs well when running classification and regression tasks on small-scale datasets. However, RVM also has certain drawbacks which restrict its practical applications, such as (1) a slow training process and (2) poor performance when training on large-scale datasets. In order to solve these problems, we first propose Discrete AdaBoost RVM (DAB-RVM), which incorporates ensemble learning into RVM. This method performs well with large-scale low-dimensional datasets. However, as the number of features increases, the training time of DAB-RVM increases as well. To avoid this phenomenon, we utilize the sufficient training samples of large-scale datasets and propose all-features-boosting RVM (AFB-RVM), which modifies the way weak classifiers are obtained. In our experiments we study the differences between various boosting techniques with RVM, demonstrating the performance of the proposed approaches on Spark. As a result of this paper, two approaches on Spark for different types of large-scale datasets are available.

  1. Molecular-Sized DNA or RNA Sequencing Machine | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The National Cancer Institute's Gene Regulation and Chromosome Biology Laboratory is seeking statements of capability or interest from parties interested in collaborative research to co-develop a molecular-sized DNA or RNA sequencing machine.

  2. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

Adaptive tracking-by-detection methods based on structured support vector machines (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  3. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    Science.gov (United States)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time should be minimized, in another version of the problem, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small, and therefore, they are not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem in energy efficient processors scheduling is considered.

  4. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

    OpenAIRE

    Abadi, Martín; Agarwal, Ashish; Barham, Paul; Brevdo, Eugene; Chen, Zhifeng; Citro, Craig; Corrado, Greg S.; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Goodfellow, Ian; Harp, Andrew; Irving, Geoffrey; Isard, Michael

    2016-01-01

    TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algo...

  5. Machine Learning for Big Data: A Study to Understand Limits at Scale

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Del-Castillo-Negrete, Carlos Emilio [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-21

This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation, and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and to train on larger samples worth it? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when it is trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results. We conclude with the following observations: (i) the accuracy of the learning algorithm increases with increasing sample size but saturates at a point, beyond which more samples do not contribute to better accuracy/learning; (ii) the richness of the feature space dictates performance, both accuracy and training time; (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times), but the improvements are not commensurate with the efforts for feature computation and training; and (iv) accuracy of the learning algorithms
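Observation (i), accuracy rising with sample size and then saturating, is easy to reproduce in miniature (synthetic data and scikit-learn assumed; these are not the report's benchmarks):

```python
# Train on growing subsets and watch held-out accuracy saturate.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

accs = []
for n in (100, 1000, 10000):
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
    accs.append(clf.score(X_te, y_te))

print([round(a, 3) for a in accs])  # gains shrink as n grows
```

Past the saturation point, additional samples mostly add training cost, which is the trade-off the study quantifies at much larger scale.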

  6. Non-machinery dialysis that achieves blood purification therapy without using full-scale dialysis machines.

    Science.gov (United States)

    Abe, Takaya; Onoda, Mistutaka; Matsuura, Tomohiko; Sugimura, Jun; Obara, Wataru; Sato, Toshiya; Takahashi, Mihoko; Chiba, Kenta; Abe, Tomiya

    2017-09-01

An electrical and water supply and a blood purification machine are required for renal replacement therapy. Acute kidney injury can occur in large numbers and on a wide scale in the case of a massive earthquake, and there is a risk that the available supply will be unable to cope with the resulting acute kidney injury cases. Non-machinery dialysis requires exclusive circuits but has the characteristic of not requiring full-scale dialysis machines. We performed perfusion experiments that used non-machinery dialysis and recent blood purification machines in 30-min intervals, and the effectiveness of non-machinery dialysis was evaluated by assessing the removal efficiency of potassium, which causes lethal arrhythmia during acute kidney injury. The non-machinery dialysis potassium removal rate was at the same level as that of continuous blood purification machines with a dialysate flow rate of 5 L/h after 15 min, and of continuous blood purification machines with a dialysate flow rate of 3 L/h after 30 min. Non-machinery dialysis required an exclusive dialysate circuit, frequent replacement of bags, and fresh dialysate every 30 min. However, it can be seen as an effective renal replacement therapy for crush-related acute kidney injury patients, even in locations or facilities lacking full-scale dialysis machines.

  7. Nano Mechanical Machining Using AFM Probe

    Science.gov (United States)

    Mostofa, Md. Golam

Complex miniaturized components with high form accuracy will play key roles in the future development of many products, as they provide portability, disposability, lower material consumption in production, low power consumption during operation, lower sample requirements for testing, and higher heat transfer due to their very high surface-to-volume ratio. Given the high market demand for such micro- and nano-featured components, different manufacturing methods have been developed for their fabrication. Some of the common technologies in micro/nano fabrication are photolithography, electron beam lithography, X-ray lithography, and other semiconductor processing techniques. Although these methods are capable of fabricating micro/nano structures with a resolution of less than a few nanometers, some of their shortcomings, such as high production costs for customized products and limited material choices, necessitate the development of other fabrication techniques. Micro/nano mechanical machining, such as atomic force microscope (AFM) probe based nano fabrication, has therefore been used to overcome some of the major restrictions of the traditional processes. This technique removes material from the workpiece by engaging a micro/nano-sized cutting tool (i.e., an AFM probe) and is applicable to a wider range of materials than the photolithographic process. In spite of the unique benefits of nano mechanical machining, this technique also faces challenges as the scale is reduced, such as size effects, burr formation, chip adhesion, fragility of tools, and tool wear. Moreover, AFM based machining does not have any rotational movement, which makes fabrication of 3D features more difficult. Thus, vibration-assisted machining is introduced into AFM probe based nano mechanical machining to overcome the limitations associated with the conventional AFM probe based scratching method. Vibration-assisted machining reduced the cutting forces

  8. SIZE SCALING RELATIONSHIPS IN FRACTURE NETWORKS

    International Nuclear Information System (INIS)

    Wilson, Thomas H.

    2000-01-01

The research conducted under DOE grant DE-FG26-98FT40385 provides a detailed assessment of size-scaling issues in natural fracture and active fault networks that extend over scales from several tens of kilometers to less than a tenth of a meter. This study incorporates analysis of data obtained from several sources, including: natural fracture patterns photographed in the Appalachian field area, natural fracture patterns presented by other workers in the published literature, patterns of active faulting in Japan mapped at a scale of 1:100,000, and lineament patterns interpreted from satellite-based radar imagery obtained over the Appalachian field area. The complexity of these patterns is always found to vary with scale. In general, but not always, patterns become less complex with increasing scale. This tendency may reverse, as can be inferred from the complexity of high-resolution radar images (8-meter pixel size), which are characterized by patterns less complex than those observed over smaller areas on the ground surface. Model studies reveal that changes in the complexity of a fracture pattern can be associated with dominant spacings between the fractures comprising the pattern, or roughly with the rock areas bounded by fractures of a certain scale. While the results do not offer a magic number (the fractal dimension) to characterize fracture networks at all scales, the modeling and analysis provide results that can be interpreted directly in terms of the physical properties of the natural fracture or active fault complex. These breaks roughly define the size of fracture-bounded regions at different scales. The larger, more extensive sets of fractures will intersect and enclose regions of a certain size, whereas smaller, less extensive sets will do the same, i.e., subdivide the rock into even smaller regions. The interpretation varies depending on the number of sets that are present, but the scale breaks in the logN/logr plots serve as a guide to interpreting the

  9. Visuomotor Dissociation in Cerebral Scaling of Size

    NARCIS (Netherlands)

    Potgieser, Adriaan R. E.; de Jong, Bauke M.

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in

  10. Downscaling Coarse Scale Microwave Soil Moisture Product using Machine Learning

    Science.gov (United States)

    Abbaszadeh, P.; Moradkhani, H.; Yan, H.

    2016-12-01

    Soil moisture (SM) is a key variable in partitioning and examining the global water-energy cycle, in agricultural planning, and in water resource management. It is also strongly coupled with climate change, playing an important role in weather forecasting, drought monitoring and prediction, flood modeling, and irrigation management. Although satellite retrievals can provide unprecedented information on soil moisture at the global scale, the products might be inadequate for basin-scale studies or regional assessment. To improve the spatial resolution of SM, this work presents a novel approach based on a Machine Learning (ML) technique that allows downscaling of satellite soil moisture to fine resolution. For this purpose, the SMAP L-band radiometer SM products were used and conditioned on the Variable Infiltration Capacity (VIC) model prediction to describe the relationship between the coarse- and fine-scale soil moisture data. The proposed downscaling approach was applied to a western US basin and the products were compared against the available SM data from in-situ gauge stations. The obtained results indicated a great potential of the machine learning technique to derive the fine-resolution soil moisture information that is currently needed for land data assimilation applications.
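    As an illustration of the general idea (not the authors' actual SMAP/VIC pipeline), a coarse product can be downscaled by learning a regression from the coarse soil moisture plus fine-scale covariates to a fine-scale target. The sketch below uses purely synthetic data and a random forest; all grids, covariates, and coefficients are invented stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 coarse radiometer pixels, each resolved into 16
# fine pixels carrying two fine-scale covariates (think vegetation index,
# topography). None of this is real SMAP or VIC data.
n_coarse, n_fine = 200, 16
covars = rng.uniform(0, 1, size=(n_coarse, n_fine, 2))

# Hidden fine-scale soil moisture driven by the covariates; the coarse
# product is its footprint average.
sm_fine = 0.15 + 0.20 * covars[..., 0] - 0.10 * covars[..., 1]
sm_coarse = sm_fine.mean(axis=1)

# Feature vector per fine pixel: parent coarse SM + local covariates.
X = np.column_stack([np.repeat(sm_coarse, n_fine), covars.reshape(-1, 2)])
y = sm_fine.reshape(-1)   # stand-in for the fine-scale model prediction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"downscaling RMSE on held-out fine pixels: {rmse:.3f}")
```

The learned relation is then applied at every fine pixel to produce the downscaled field; in the real application the fine-scale target would come from the hydrologic model rather than being known exactly.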

  11. CT crown for on-machine scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for on-machine calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises an invar disc on which several reference ruby spheres are positioned at different heights using carbon fibre rods. The artefact is positioned and scanned together...

  12. Finite size scaling theory

    International Nuclear Information System (INIS)

    Rittenberg, V.

    1983-01-01

    Fisher's finite-size scaling describes the crossover from the singular behaviour of thermodynamic quantities at the critical point to the analytic behaviour of the finite system. Recent extensions of the method, the transfer matrix technique and the Hamiltonian formalism, are discussed in this paper. The method is presented, with equations deriving the scaling function, critical temperature, and exponent ν. As an application of the method, a 3-state Hamiltonian with Z_3 global symmetry is studied. Diagonalization of the Hamiltonian for finite chains allows one to estimate the critical exponents, and also to discover new phase transitions at lower temperatures. The critical points λ and indices ν estimated from finite-size scaling are given.
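    The crossover the abstract refers to is captured by the standard finite-size scaling ansatz; the form below is the textbook statement, not the paper's specific equations:

```latex
% Near criticality the correlation length diverges as
\xi(t) \sim |t|^{-\nu}, \qquad t = \frac{T - T_c}{T_c},
% and an observable Q measured on a finite chain of length L obeys
Q_L(t) = L^{-\omega/\nu}\, f\!\left(L^{1/\nu}\, t\right),
% with f analytic, so matching finite-chain data from two sizes
% ("phenomenological renormalization"),
L\, G_L(\lambda^*) = L'\, G_{L'}(\lambda^*),
% where G_L is an energy gap of the finite-chain Hamiltonian, yields
% estimates of the critical coupling \lambda^* and the exponent \nu.
```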

  13. Automated Bug Assignment: Ensemble-based Machine Learning in Large Scale Industrial Contexts

    OpenAIRE

    Jonsson, Leif; Borg, Markus; Broman, David; Sandahl, Kristian; Eldh, Sigrid; Runeson, Per

    2016-01-01

    Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment techniques using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learni...

  14. Fault size classification of rotating machinery using support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y. S.; Lee, D. H.; Park, S. K. [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2012-03-15

    Studies on fault diagnosis of rotating machinery have been carried out to obtain the machinery condition in two ways. The first is a classical approach based on signal processing and analysis using vibration and acoustic signals. The second is to use artificial intelligence techniques to classify machinery conditions into normal or one of the pre-determined fault conditions. The Support Vector Machine (SVM) is well known as an intelligent classifier with robust generalization ability. In this study, a two-step approach is proposed to predict fault types and fault sizes of rotating machinery in nuclear power plants using a multi-class SVM technique. The model first classifies the normal condition and 12 fault types, and then identifies their sizes in case any fault is predicted. Time and frequency domain features are extracted from the measured vibration signals and used as input to the SVM. A test rig is used to simulate the normal condition and the 12 well-known artificial fault conditions, with three to six fault sizes, of rotating machinery. The application results on the test data show that the present method can estimate fault types as well as fault sizes with high accuracy for bearing- and shaft-related faults and misalignment. Further research, however, is required to identify fault sizes in the case of unbalance, rubbing, looseness, and coupling-related faults.
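    The two-step scheme the abstract describes can be sketched as follows. The features, class layout, and data below are synthetic stand-ins, not the authors' test-rig signals; the structure (a type classifier followed by per-type size classifiers) is the point:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in features (think RMS, kurtosis, band energy) for a
# machine in the "normal" state or one of 3 fault types x 3 fault sizes.
def make_samples(fault, size, n=60):
    centre = np.array([fault, fault * 0.5, size])   # toy class-dependent mean
    return centre + 0.15 * rng.standard_normal((n, 3))

X, y_type, y_size = [], [], []
for fault in range(4):                 # 0 = normal, 1..3 = fault types
    for size in (range(1, 4) if fault else [0]):
        Xs = make_samples(fault, size)
        X.append(Xs); y_type += [fault] * len(Xs); y_size += [size] * len(Xs)
X = np.vstack(X); y_type = np.array(y_type); y_size = np.array(y_size)

# Step 1: multi-class SVM for the fault type.
type_clf = SVC(kernel="rbf").fit(X, y_type)

# Step 2: one size classifier per fault type, consulted only when a fault
# is flagged in step 1.
size_clf = {f: SVC(kernel="rbf").fit(X[y_type == f], y_size[y_type == f])
            for f in (1, 2, 3)}

def diagnose(x):
    f = type_clf.predict(x[None])[0]
    return (f, size_clf[f].predict(x[None])[0]) if f else (0, 0)

print(diagnose(np.array([2.0, 1.0, 3.0])))   # a point near fault type 2, size 3
```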

  15. Fault size classification of rotating machinery using support vector machine

    International Nuclear Information System (INIS)

    Kim, Y. S.; Lee, D. H.; Park, S. K.

    2012-01-01

    Studies on fault diagnosis of rotating machinery have been carried out to obtain the machinery condition in two ways. The first is a classical approach based on signal processing and analysis using vibration and acoustic signals. The second is to use artificial intelligence techniques to classify machinery conditions into normal or one of the pre-determined fault conditions. The Support Vector Machine (SVM) is well known as an intelligent classifier with robust generalization ability. In this study, a two-step approach is proposed to predict fault types and fault sizes of rotating machinery in nuclear power plants using a multi-class SVM technique. The model first classifies the normal condition and 12 fault types, and then identifies their sizes in case any fault is predicted. Time and frequency domain features are extracted from the measured vibration signals and used as input to the SVM. A test rig is used to simulate the normal condition and the 12 well-known artificial fault conditions, with three to six fault sizes, of rotating machinery. The application results on the test data show that the present method can estimate fault types as well as fault sizes with high accuracy for bearing- and shaft-related faults and misalignment. Further research, however, is required to identify fault sizes in the case of unbalance, rubbing, looseness, and coupling-related faults.

  16. Effect of Machining Velocity in Nanoscale Machining Operations

    International Nuclear Information System (INIS)

    Islam, Sumaiya; Khondoker, Noman; Ibrahim, Raafat

    2015-01-01

    The aim of this study is to investigate the generated forces and deformations of single-crystal Cu with (100), (110) and (111) crystallographic orientations in nanoscale machining operations. A nanoindenter equipped with a nanoscratching attachment was used for the machining operations and in-situ observation of a nanoscale groove. As a machining parameter, the machining velocity was varied to measure the normal and cutting forces. At a fixed machining velocity, different levels of normal and cutting forces were generated due to the different crystallographic orientations of the specimens. Moreover, after the machining operation the percentage of elastic recovery was measured, and it was found that both elastic and plastic deformations were responsible for producing a nanoscale groove within the range of machining velocities from 250-1000 nm/s. (paper)

  17. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short read length (Sanger sequencing, for example, yields 700 bp fragments on average) and the unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that this open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large-scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion Large-scale machine learning methods are well suited for gene...
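    A miniature version of the two-stage approach might look like the sketch below: a linear discriminant compresses codon-usage frequencies into one score, and a second-stage classifier combines it with other fragment features. The codon data are synthetic, and logistic regression stands in for the paper's artificial neural network:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
CODONS = 64

def sample_freqs(coding, n):
    # Coding fragments get a biased codon distribution, non-coding a flat one
    # (a toy caricature of real codon-usage bias).
    base = np.ones(CODONS)
    if coding:
        base[:8] = 4.0
    p = base / base.sum()
    counts = rng.multinomial(200, p, size=n)   # 200 codons per fragment
    return counts / 200.0

X_codon = np.vstack([sample_freqs(True, 300), sample_freqs(False, 300)])
y = np.array([1] * 300 + [0] * 300)

# Stage 1: a linear discriminant maps 64 codon frequencies to one score.
lda = LinearDiscriminantAnalysis().fit(X_codon, y)
score = lda.decision_function(X_codon)

# Stage 2: combine the score with ORF length and GC content (random
# stand-ins here) in a simple classifier replacing the ANN.
extra = rng.uniform(0, 1, size=(600, 2))
clf = LogisticRegression().fit(np.column_stack([score, extra]), y)

acc = clf.score(np.column_stack([score, extra]), y)
print(f"training accuracy: {acc:.2f}")
```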

  18. Particle size of radioactive aerosols generated during machine operation in high-energy proton accelerators

    International Nuclear Information System (INIS)

    Oki, Yuichi; Kanda, Yukio; Kondo, Kenjiro; Endo, Akira

    2000-01-01

    In high-energy accelerators, non-radioactive aerosols are abundantly generated due to high radiation doses during machine operation. Under such conditions, radioactive atoms, which are produced through various nuclear reactions in the air of accelerator tunnels, form radioactive aerosols. These aerosols might be inhaled by workers who enter the tunnel just after the beam stop. Their particle size is very important information for the estimation of internal exposure doses. In this work, focusing on typical radionuclides such as 7Be and 24Na, their particle size distributions are studied. An aluminum chamber was placed in the EP2 beam line of the 12-GeV proton synchrotron at the High Energy Accelerator Research Organization (KEK). Aerosol-free air was introduced to the chamber, and aerosols formed in the chamber were sampled during machine operation. A screen-type diffusion battery was employed in the aerosol-size analysis. Assuming that the aerosols have log-normal size distributions, their size distributions were obtained from the radioactivity concentrations at the entrance and exit of the diffusion battery. The radioactivity of the aerosols was measured with a Ge detector system, and the concentrations of non-radioactive aerosols were obtained using a condensation particle counter (CPC). The aerosol size (radius) for 7Be and 24Na was found to be 0.01-0.04 μm, and was always larger than that of the non-radioactive aerosols. The concentration of non-radioactive aerosols was found to be 10^6-10^7 particles/cm^3. The size of the radioactive aerosols was much smaller than that of ordinary atmospheric aerosols. Internal doses due to inhalation of the radioactive aerosols were estimated based on the respiratory tract model of ICRP Pub. 66. (author)

  19. Design Of A Small-Scale Hulling Machine For Improved Wet-Processed Coffee.

    Directory of Open Access Journals (Sweden)

    Adeleke

    2017-08-01

    Full Text Available The method of primary processing of coffee is a vital determinant of quality and price. The wet processing method produces higher-quality beans but is very laborious. This work outlines the design of a small-scale, cost-effective, ergonomic, and easily maintained and operated coffee hulling machine that can improve the quality and productivity of green coffee beans. The machine can be constructed from locally available materials at a relatively low cost of about NGN 140000.00, with a cheap running cost. The beaters are made from rubber strips, which can deflect on contact with any obstruction, causing little or no stress on drum members and reducing the risk of damage to both the beans and the machine. The machine is portable and detachable, which makes it suitable for ownership by a group of farmers who can move it from one farm to another, easing affordability and running cost. The running cost may be further reduced by the fact that the machine is powered by a 3.0 Hp petrol engine, which is suitable for other purposes among rural dwellers. The eventual construction of the machine will encourage more farmers to go into wet processing of coffee and reduce the foreign exchange hitherto lost to this purpose.

  20. Optimizing Distributed Machine Learning for Large Scale EEG Data Set

    Directory of Open Access Journals (Sweden)

    M Bilal Shaikh

    2017-06-01

    Full Text Available Distributed Machine Learning (DML) has gained more importance than ever in this era of Big Data. There are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation of data is at its limit; however, increasing the number of machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated, random data distribution of datasets, which misses the power of user-defined, intelligent data partitioning based on domain knowledge. We have conducted an empirical study which uses an EEG data set collected through the P300 Speller component of an ERP (Event Related Potential), which is widely used in BCI problems; it helps in translating the intention of a subject while performing any cognitive task. EEG data contain noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. The use of machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning the data into different chunks and preparing distributed models using an Elastic CV Classifier. To present a case of optimizing distributed machine learning, we propose an intelligent, user-defined data partitioning approach that can impact the accuracy of distributed machine learners on average. Our results show a better average AUC as compared to the average AUC obtained after applying random data partitioning, which gives the user no control over data partitioning. The domain-specific intelligent partitioning improves the average accuracy of the distributed learner. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.
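    The mechanics of domain-aware partitioning can be illustrated as follows: instead of handing the framework a random split, the data are chunked by a domain key (here, recording session) and one local model is trained per chunk, then combined. Everything below is an invented stand-in: synthetic "EEG" features, random session keys, and plain logistic learners in place of the Elastic CV Classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Synthetic feature table: 800 trials, 10 features, 4 recording sessions.
n, d = 800, 10
X = rng.standard_normal((n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0).astype(int)
session = rng.integers(0, 4, size=n)      # domain key used for partitioning

# Domain-aware partitioning: one chunk (and one local model) per session.
models = {s: LogisticRegression().fit(X[session == s], y[session == s])
          for s in range(4)}

# Combine the distributed learners by averaging their probabilities.
proba = np.mean([m.predict_proba(X)[:, 1] for m in models.values()], axis=0)
print(f"ensemble AUC: {roc_auc_score(y, proba):.2f}")
```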

  1. On-line transient stability assessment of large-scale power systems by using ball vector machines

    International Nuclear Information System (INIS)

    Mohammadi, M.; Gharehpetian, G.B.

    2010-01-01

    In this paper the ball vector machine (BVM) has been used for on-line transient stability assessment of large-scale power systems. To classify the system transient security status, a BVM has been trained for all contingencies. The proposed BVM-based security assessment algorithm requires very little training time and space in comparison with artificial neural networks (ANN), support vector machines (SVM) and other machine learning based algorithms. In addition, the proposed algorithm uses fewer support vectors (SV) and is therefore faster than existing algorithms for on-line applications. One of the main points in applying a machine learning method is feature selection. In this paper, a new Decision Tree (DT) based feature selection technique has been presented. The proposed BVM-based algorithm has been applied to the New England 39-bus power system. The simulation results show the effectiveness and stability of the proposed method for the on-line transient stability assessment of large-scale power systems. The proposed feature selection algorithm has been compared with different feature selection algorithms, and the simulation results demonstrate its effectiveness.

  2. Visuomotor Dissociation in Cerebral Scaling of Size.

    Science.gov (United States)

    Potgieser, Adriaan R E; de Jong, Bauke M

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while maintaining a constant size of drawing (visual incongruity) or the size of the examples remained constant while subjects were instructed to make changes in size (motor incongruity). These incongruent conditions were compared to congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented simultaneous use of a 'resized' virtual template and actual picture information requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest in motor incongruity, while right pre-dorsal premotor activation specifically occurred in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred in all variably sized drawing, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.

  3. Teraflop-scale Incremental Machine Learning

    OpenAIRE

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-u...

  4. A comparative analysis of support vector machines and extreme learning machines.

    Science.gov (United States)

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches have become significant and have attracted many research efforts. This paper performs a comparative analysis of basic ELMs and support vector machines (SVMs) from two viewpoints that are different from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are exhibited with changing training sample size. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, great superiority in computational speed, especially for large-scale sample problems, is found in ELMs. The results obtained can provide insight into the essential relationship between them, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
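    The low computational cost mentioned above follows from the ELM recipe itself: input weights are drawn at random and only the output weights are trained, by a single least-squares solve. A minimal generic sketch (not the paper's experimental setup) on a toy regression task:

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_train(X, y, n_hidden=50):
    """Single-hidden-layer ELM: random input weights, least-squares output."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer activations
    beta = np.linalg.pinv(H) @ y      # only the output weights are learned
    return W, b, beta

def elm_predict(X, params):
    W, b, beta = params
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) from noisy samples.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

params = elm_train(X, y)
err = np.sqrt(np.mean((elm_predict(X, params) - np.sin(X[:, 0])) ** 2))
print(f"RMSE vs. true sin(x): {err:.3f}")
```

There is no iterative optimization anywhere, which is why training time grows only with the cost of one pseudo-inverse.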

  5. The influence of negative training set size on machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

    The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning applications was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decrease in hit recall. The analysis of the dynamics of those variations allowed us to recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
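    The qualitative effect can be reproduced on synthetic data: keeping the positives fixed and enlarging the negative training set typically trades recall for precision. Everything below (features, counts, classifier choice) is an illustrative stand-in, not the paper's ZINC-based protocol:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(5)

def make_set(n_pos, n_neg):
    # Toy "fingerprints": actives shifted from inactives, with class overlap.
    Xp = rng.standard_normal((n_pos, 20)) + 0.5
    Xn = rng.standard_normal((n_neg, 20))
    return np.vstack([Xp, Xn]), np.array([1] * n_pos + [0] * n_neg)

X_test, y_test = make_set(200, 2000)    # screening-like, imbalanced test set

precisions, recalls = [], []
# Fixed number of positives, growing negative training set.
for n_neg in (200, 1000, 5000):
    X, y = make_set(200, n_neg)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    pred = clf.predict(X_test)
    precisions.append(precision_score(y_test, pred))
    recalls.append(recall_score(y_test, pred))
    print(f"neg={n_neg:5d}  precision={precisions[-1]:.2f}"
          f"  recall={recalls[-1]:.2f}")
```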

  6. Bypassing the Kohn-Sham equations with machine learning.

    Science.gov (United States)

    Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert

    2017-10-11

    Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.

  7. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto; Watson, James R.; Jönsson, Bror; Gasol, Josep M.; Salazar, Guillem; Acinas, Silvia G.; Estrada, Marta; Massana, Ramón; Logares, Ramiro; Giner, Caterina R.; Pernice, Massimo C.; Olivar, M. Pilar; Citores, Leire; Corell, Jon; Rodríguez-Ezpeleta, Naiara; Acuña, José Luis; Molina-Ramírez, Axayacatl; González-Gordillo, J. Ignacio; Cózar, Andrés; Martí, Elisa; Cuesta, José A.; Agusti, Susana; Fraile-Nuez, Eugenio; Duarte, Carlos M.; Irigoien, Xabier; Chust, Guillem

    2018-01-01

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  8. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto

    2018-01-04

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  9. Visuomotor Dissociation in Cerebral Scaling of Size.

    Directory of Open Access Journals (Sweden)

    Adriaan R E Potgieser

    Full Text Available Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while maintaining a constant size of drawing (visual incongruity) or the size of the examples remained constant while subjects were instructed to make changes in size (motor incongruity). These incongruent conditions were compared to congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented simultaneous use of a 'resized' virtual template and actual picture information requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest in motor incongruity, while right pre-dorsal premotor activation specifically occurred in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred in all variably sized drawing, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.

  10. Decreased attention to object size information in scale errors performers.

    Science.gov (United States)

    Grzyb, Beata J; Cangelosi, Angelo; Cattani, Allegra; Floccia, Caroline

    2017-05-01

    Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. The existing explanations of these curious action errors assume (but have never explicitly tested) children's decreased attention to object size information. This study investigated the attention to object size information in scale errors performers. Two groups of children aged 18-25 months (N=52) and 48-60 months (N=23) were tested in two consecutive tasks: an action task that replicated the original scale-errors elicitation situation, and a looking task that involved watching, on a computer screen, actions performed with objects of adequate or inadequate size. Our key finding - that children performing scale errors in the action task subsequently paid less attention to size changes than non-scale-errors performers in the looking task - suggests that the origins of scale errors in childhood operate already at the perceptual level, and not at the action level. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Finite-size scaling in two-dimensional superfluids

    International Nuclear Information System (INIS)

    Schultka, N.; Manousakis, E.

    1994-01-01

    Using the x-y model and a nonlocal updating scheme called cluster Monte Carlo, we calculate the superfluid density of a two-dimensional superfluid on large-size square lattices LxL up to 400x400. This technique allows us to approach temperatures close to the critical point, and by studying a wide range of L values and applying finite-size scaling theory we are able to extract the critical properties of the system. We calculate the superfluid density and from that we extract the renormalization-group beta function. We derive finite-size scaling expressions using the Kosterlitz-Thouless-Nelson renormalization group equations and show that they are in very good agreement with our numerical results. This allows us to extrapolate our results to the infinite-size limit. We also find that the universal discontinuity of the superfluid density at the critical temperature is in very good agreement with the Kosterlitz-Thouless-Nelson calculation and experiments

  12. Size scaling of static friction.

    Science.gov (United States)

    Braun, O M; Manini, Nicola; Tosatti, Erio

    2013-02-22

Sliding friction across a thin soft lubricant film typically occurs by stick slip: the lubricant fully solidifies at stick, then yields and flows at slip. The static friction force per unit area preceding slip is known from molecular dynamics (MD) simulations to decrease with increasing contact area. That leaves the large-size fate of stick slip unclear; its possible vanishing is important, as it would herald smooth sliding with a dramatic drop of kinetic friction at large size. Here we formulate a scaling law of the static friction force, which for a soft lubricant is predicted to decrease as f_m + Δf/A^γ with increasing contact area A, where γ > 0. Our main finding is that the value of f_m, which controls the survival of stick slip at large size, can be evaluated by simulations of comparably small size. MD simulations of soft lubricant sliding are presented which verify this theory.
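As a hedged illustration (not the authors' data; all constants below are invented), the scaling form can be fitted to synthetic measurements to recover f_m, the quantity that decides whether stick slip survives at large contact area:

```python
import numpy as np
from scipy.optimize import curve_fit

# Scaling law from the abstract: static friction per unit area
# decays toward a finite value f_m as f(A) = f_m + df / A**gamma.
def static_friction(A, f_m, df, gamma):
    return f_m + df / A**gamma

# Synthetic "simulation" data (invented constants, 1% multiplicative noise).
rng = np.random.default_rng(0)
A = np.logspace(0, 4, 30)                       # contact areas
truth = static_friction(A, f_m=1.0, df=2.0, gamma=0.5)
f = truth * (1 + 0.01 * rng.standard_normal(A.size))

popt, _ = curve_fit(static_friction, A, f, p0=(0.5, 1.0, 0.3))
f_m_est, df_est, gamma_est = popt
print(f_m_est, gamma_est)   # f_m controls whether stick slip survives as A grows
```

The fitted f_m is what the abstract argues can be extracted from comparably small simulation sizes, since the df/A^γ correction is what a finite simulation cell contributes.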

  13. Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2011-01-01

The large-scale rotary machine with multiple supports, such as the rotary kiln and the rope-laying machine, is key equipment in the construction, chemical, and agricultural industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis-line deflection is a vital parameter for determining the mechanical state of a rotary machine, so the axial vibration of the body needs to be studied for dynamic monitoring and adjustment of the machine. Using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements: rigid disk, elastic shaft, and linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the overall motion equation of the response. Taking a rotary kiln as an instance, natural frequencies, modal shapes, and the vibration response to a given exciting axis-line deflection are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in common axis-line measurement methods. The displacement response can be used for further measurement dynamical error analysis and compensation. The overall motion equation of the response can be applied to predict body motion under abnormal mechanical conditions and provides theoretical guidance for machine failure diagnosis.

  14. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
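The linear-versus-kernel trade-off the abstract describes can be sketched with scikit-learn, whose LinearSVR wraps a LIBLINEAR-style linear solver and whose SVR wraps libsvm; the dataset here is synthetic, standing in for the paper's molecular descriptors:

```python
import time
import numpy as np
from sklearn.datasets import make_regression
from sklearn.svm import LinearSVR, SVR

# Toy regression data standing in for signature descriptors of compounds.
X, y = make_regression(n_samples=2000, n_features=50, noise=0.1, random_state=0)

# Linear SVM (LIBLINEAR-style): training cost grows roughly linearly
# with the number of samples.
t0 = time.perf_counter()
linear = LinearSVR(max_iter=10000, random_state=0).fit(X, y)
t_linear = time.perf_counter() - t0

# Kernel SVM (libsvm-style RBF): works with an n x n kernel matrix,
# which is what becomes infeasible near 1.2 million compounds.
t0 = time.perf_counter()
rbf = SVR(kernel="rbf").fit(X, y)
t_rbf = time.perf_counter() - t0

print(linear.score(X, y), rbf.score(X, y), t_linear, t_rbf)
```

At this toy scale both finish quickly; the quadratic kernel-matrix cost is what separates them as n grows, which is the abstract's point.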

  15. Spatial patterns of correlated scale size and scale color in relation to color pattern elements in butterfly wings.

    Science.gov (United States)

    Iwata, Masaki; Otaki, Joji M

    2016-02-01

    Complex butterfly wing color patterns are coordinated throughout a wing by unknown mechanisms that provide undifferentiated immature scale cells with positional information for scale color. Because there is a reasonable level of correspondence between the color pattern element and scale size at least in Junonia orithya and Junonia oenone, a single morphogenic signal may contain positional information for both color and size. However, this color-size relationship has not been demonstrated in other species of the family Nymphalidae. Here, we investigated the distribution patterns of scale size in relation to color pattern elements on the hindwings of the peacock pansy butterfly Junonia almana, together with other nymphalid butterflies, Vanessa indica and Danaus chrysippus. In these species, we observed a general decrease in scale size from the basal to the distal areas, although the size gradient was small in D. chrysippus. Scales of dark color in color pattern elements, including eyespot black rings, parafocal elements, and submarginal bands, were larger than those of their surroundings. Within an eyespot, the largest scales were found at the focal white area, although there were exceptional cases. Similarly, ectopic eyespots that were induced by physical damage on the J. almana background area had larger scales than in the surrounding area. These results are consistent with the previous finding that scale color and size coordinate to form color pattern elements. We propose a ploidy hypothesis to explain the color-size relationship in which the putative morphogenic signal induces the polyploidization (genome amplification) of immature scale cells and that the degrees of ploidy (gene dosage) determine scale color and scale size simultaneously in butterfly wings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Development of large size NC trepanning and horning machine

    International Nuclear Information System (INIS)

    Wada, Yoshiei; Aono, Fumiaki; Siga, Toshihiko; Sudo, Eiichi; Takasa, Seiju; Fukuyama, Masaaki; Sibukawa, Koichi; Nakagawa, Hirokatu

    2010-01-01

Due to the recent increase in world energy demand, construction of a considerable number of nuclear and fossil power plants has been proceeding, with more planned. High-generating-capacity plants require large forged components, such as monoblock turbine rotor shafts, whose dimensions tend to increase. Some of these components have a center bore for material testing, NDE, and other uses. To cope with the increased production of these large forgings with center bores, a new trepanning machine dedicated to boring deep holes was developed at JSW, drawing on accumulated experience and expert know-how. The machine is the world's largest 400 t trepanning and horning machine with numerical control and has many advantages in safety, machining precision, machining efficiency, operability, labor saving, and energy saving. Furthermore, transfer of technical skill became easier through a concentrated monitoring system based on numerically analysed experts' know-how. (author)

  17. Effects of dimensional size and surface roughness on service performance for a micro Laval nozzle

    International Nuclear Information System (INIS)

    Cai, Yukui; Liu, Zhanqiang; Shi, Zhenyu

    2017-01-01

Nozzles with large and small dimensions are widely used in various industries. The main objective of this research is to investigate the effects of dimensional size and surface roughness on the service performance of a micro Laval nozzle. The variation of nozzle service performance from the conventional macro scale to the micro scale is presented in this paper. The results show that the dimensional size of the nozzle has a serious effect on gas flow friction: with decreasing nozzle size, the velocity and thrust performance deteriorate. The performance of the micro nozzle is less sensitive to variations in surface roughness than that of a large-scale nozzle. Surface-quality improvement and burr-prevention technologies are proposed to reduce the effect of friction on micro nozzle performance. A novel process is then developed to control and suppress burr generation during micro nozzle machining: polymethyl methacrylate is applied as a coating to the rough-machined surface before finish machining. Finally, a micro nozzle with a throat diameter of 1 mm was machined successfully. Thrust tests show that this machining process benefits the service performance of the micro nozzle. (paper)

  18. Scale economies and optimal size in the Swiss gas distribution sector

    International Nuclear Information System (INIS)

    Alaeifar, Mozhgan; Farsi, Mehdi; Filippini, Massimo

    2014-01-01

This paper studies the cost structure of Swiss gas distribution utilities. Several econometric models are applied to a panel of 26 companies over 1996–2000. Our main objective is to estimate the optimal size and scale economies of the industry and to study their possible variation with respect to network characteristics. The results indicate the presence of unexploited scale economies. However, very large companies in the sample and companies with a disproportionate mixture of output and density are an exception. Furthermore, the estimated optimal size for the majority of companies in the sample is far greater than their actual size, suggesting remarkable efficiency gains through reorganization of the industry. The results also highlight the effect of customer density on optimal size: networks with higher density or greater complexity have a lower optimal size. - Highlights: • Presence of unexploited scale economies for small and medium-sized companies. • Scale economies vary considerably with customer density. • Higher density or greater complexity is associated with lower optimal size. • Optimal size varies across companies through unobserved heterogeneity. • Firms with low density can gain more from expanding firm size.

  19. Amp: A modular approach to machine learning in atomistic simulations

    Science.gov (United States)

    Khorshidi, Alireza; Peterson, Andrew A.

    2016-10-01

Electronic structure calculations, such as those employing Kohn-Sham density functional theory or ab initio wavefunction theories, have allowed for atomistic-level understanding of a wide variety of phenomena and properties of matter at small scales. However, the computational cost of electronic structure methods increases drastically with length and time scales, which makes these methods difficult to apply to long-time molecular dynamics simulations or large systems. Machine-learning techniques can provide accurate potentials that match the quality of electronic structure calculations, provided sufficient training data. These potentials can then be used to rapidly simulate large, long time-scale phenomena at a quality similar to the parent electronic structure approach. Machine-learning potentials usually take a bias-free mathematical form and can be readily developed for a wide variety of systems. Electronic structure calculations have favorable properties, namely that they are noiseless and that targeted training data can be produced on demand, which make them particularly well suited for machine learning. This paper discusses our modular approach to atomistic machine learning through the development of the open-source Atomistic Machine-learning Package (Amp), which allows for representations of both the total and atom-centered potential energy surface, in both periodic and non-periodic systems. Potentials developed through the atom-centered approach are simultaneously applicable to systems of various sizes. Interpolation can be enhanced by introducing custom descriptors of the local environment; we demonstrate this in the current work for Gaussian-type, bispectrum, and Zernike-type descriptors. Amp has an intuitive and modular structure with an interface through the Python scripting language, yet has parallelizable Fortran components for demanding tasks; it is designed to integrate closely with the widely used Atomic Simulation Environment (ASE), which

  20. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  1. GA-4 half-scale cask model fabrication

    International Nuclear Information System (INIS)

    Meyer, R.J.

    1995-01-01

    Unique fabrication experience was gained during the construction of a half-scale model of the GA-4 Legal Weight Truck Cask. Techniques were developed for forming, welding, and machining XM-19 stainless steel. Noncircular 'rings' of depleted uranium were cast and machined to close tolerances. The noncircular cask body, gamma shield, and cavity liner were produced using a nonconventional approach in which components were first machined to final size and then welded together using a low-distortion electron beam process. Special processes were developed for fabricating the bonded aluminum honeycomb impact limiters. The innovative design of the cask internals required precision deep hole drilling, low-distortion welding, and close tolerance machining. Valuable lessons learned were documented for use in future manufacturing of full-scale prototype and production units

  2. Model of large scale man-machine systems with an application to vessel traffic control

    NARCIS (Netherlands)

    Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.

    1989-01-01

    Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the

  3. Inverse size scaling of the nucleolus by a concentration-dependent phase transition.

    Science.gov (United States)

    Weber, Stephanie C; Brangwynne, Clifford P

    2015-03-02

    Just as organ size typically increases with body size, the size of intracellular structures changes as cells grow and divide. Indeed, many organelles, such as the nucleus [1, 2], mitochondria [3], mitotic spindle [4, 5], and centrosome [6], exhibit size scaling, a phenomenon in which organelle size depends linearly on cell size. However, the mechanisms of organelle size scaling remain unclear. Here, we show that the size of the nucleolus, a membraneless organelle important for cell-size homeostasis [7], is coupled to cell size by an intracellular phase transition. We find that nucleolar size directly scales with cell size in early C. elegans embryos. Surprisingly, however, when embryo size is altered, we observe inverse scaling: nucleolar size increases in small cells and decreases in large cells. We demonstrate that this seemingly contradictory result arises from maternal loading of a fixed number rather than a fixed concentration of nucleolar components, which condense into nucleoli only above a threshold concentration. Our results suggest that the physics of phase transitions can dictate whether an organelle assembles, and, if so, its size, providing a mechanistic link between organelle assembly and cell size. Since the nucleolus is known to play a key role in cell growth, this biophysical readout of cell size could provide a novel feedback mechanism for growth control. Copyright © 2015 Elsevier Ltd. All rights reserved.
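The fixed-number argument behind the inverse scaling can be sketched in a few lines (all numbers invented for illustration): a fixed load of components condenses only above a saturation concentration, so a smaller cell leaves more material above threshold.

```python
# Minimal sketch of the abstract's argument (all numbers invented):
# the embryo loads a FIXED NUMBER of nucleolar components, which
# condense only above a saturation concentration c_sat.
N_total = 1000.0   # molecules loaded maternally (fixed number, not concentration)
c_sat = 2.0        # threshold concentration for condensation

def condensate_amount(cell_volume):
    """Material in the nucleolar condensate for a given cell volume."""
    dissolved = c_sat * cell_volume          # stays in solution
    return max(0.0, N_total - dissolved)     # the excess condenses

small, large = condensate_amount(100.0), condensate_amount(400.0)
print(small, large)  # 800.0 vs 200.0: a bigger nucleolus in the SMALLER cell
```

Had the embryo loaded a fixed *concentration* instead, the excess over c_sat would scale with volume and the nucleolus would shrink in small cells, which is exactly the distinction the experiments resolve.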

  4. Experimental determination of the dimensionless scaling parameter of energy transport in tokamaks

    International Nuclear Information System (INIS)

    Luce, T.C.; Petty, C.C.

    1995-07-01

Controlled fusion experiments have focused on the variation of plasma characteristics as the engineering or control parameters are systematically changed. This has led to the development of extrapolation formulae for predicting future device performance using these same variables as a basis. Recently, it was noticed that present-day tokamaks can operate with all of the dimensionless variables that appear in the Vlasov-Maxwell system of equations at the values projected for a fusion power plant, with the exception of the parameter ρ*, the gyroradius normalized to the machine size. The scaling with this parameter is related to the benefit of increasing the size of the machine, either directly or effectively by increasing the magnetic field. It is exactly this scaling which is subject to systematic error in the inter-machine databases, and which is the cost driver for any future machine. If this scaling can be fixed by a series of single-machine experiments, much as the current and power scalings have been, confidence in the prediction of future device performance would be greatly enhanced. While carrying out experiments of this type, it was also found that ρ* scaling can illuminate the underlying physics of energy transport. Conclusions drawn from experiments on the DIII-D tokamak in these two areas are the subject of this paper.

  5. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  6. Finite size scaling and phenomenological renormalization

    International Nuclear Information System (INIS)

    Derrida, B.; Seze, L. de; Vannimenus, J.

    1981-05-01

The basic equations of the phenomenological renormalization method are recalled, and a simple derivation using finite-size scaling is presented. The convergence of the method is studied analytically for the Ising model. Using this method we give predictions for 2D bond percolation. Finally, we discuss how the method can be applied to random systems.
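The method can be made concrete for the 2D Ising model (a standard textbook construction, not taken from this record): compute the correlation length of strips of two widths from transfer-matrix eigenvalues, then solve the phenomenological-renormalization condition ξ_L/L = ξ_{L'}/L' for its fixed point, an estimate of the critical coupling. Widths 3 and 4 keep the matrices tiny.

```python
import numpy as np
from itertools import product

def xi(beta, L):
    """Correlation length of an Ising strip of width L (periodic) from
    the two largest eigenvalues of the symmetric row transfer matrix."""
    spins = [np.array(s) for s in product([-1, 1], repeat=L)]
    # Intra-row energy with periodic boundary along the width.
    E = [float(np.sum(s * np.roll(s, 1))) for s in spins]
    n = len(spins)
    T = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            inter = float(np.sum(spins[a] * spins[b]))
            T[a, b] = np.exp(beta * (inter + 0.5 * (E[a] + E[b])))
    lam = np.linalg.eigvalsh(T)          # ascending; all positive here
    return 1.0 / np.log(lam[-1] / lam[-2])

def f(beta, L1=3, L2=4):
    # Phenomenological-renormalization condition: xi_L / L equal for both widths.
    return xi(beta, L1) / L1 - xi(beta, L2) / L2

# Bisection for the fixed-point coupling; the exact value is ~0.4407 (Tc ~ 2.269).
lo, hi = 0.2, 0.8
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
beta_c = 0.5 * (lo + hi)
print(1.0 / beta_c)  # estimated critical temperature
```

Even these very narrow strips land within a few percent of the exact Onsager result, which is the rapid convergence the abstract analyzes.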

  7. Dynamic fatigue of a machinable glass-ceramic

    Science.gov (United States)

    Smyth, K. K.; Magida, M. B.

    1983-01-01

    To assess the stress-corrosion susceptibility of a machinable glass-ceramic, its dynamic fatigue behavior was investigated by measuring its strength as a function of stress rate. Fracture mechanics techniques were used to analyze the results for the purpose of making lifetime predictions for components of this material. This material was concluded to have only moderate resistance (N = 30) to stress corrosion in ambient conditions. The effects of specimen size on strength were assessed for the material used in this study; it was concluded that the Weibull edge-flaw scaling law adequately describes the observed strength-size relation.
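A minimal sketch of the Weibull size-scaling law invoked above, assuming strength is controlled by edge flaws so the relevant size measure is the stressed edge length L (values invented for illustration):

```python
# Weibull size scaling: for edge-dominated flaws, mean strengths of two
# specimen sizes are related through sigma1/sigma2 = (L2/L1)**(1/m),
# where L is the stressed edge length and m the Weibull modulus.
def scaled_strength(sigma1, L1, L2, m):
    """Predict the strength of a specimen with edge length L2 from a
    measured strength sigma1 at edge length L1 (hypothetical values)."""
    return sigma1 * (L1 / L2) ** (1.0 / m)

# Invented example: doubling the stressed edge length at m = 10
# lowers strength by a factor 2**(-1/10), i.e. about 7%.
s2 = scaled_strength(100.0, L1=1.0, L2=2.0, m=10.0)
print(s2)
```

The same relation with stressed area or volume in place of edge length gives the area- and volume-flaw scaling laws; the study's conclusion is that the edge-length version fits this glass-ceramic.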

  8. Finite size scaling and lattice gauge theory

    International Nuclear Information System (INIS)

    Berg, B.A.

    1986-01-01

    Finite size (Fisher) scaling is investigated for four dimensional SU(2) and SU(3) lattice gauge theories without quarks. It allows to disentangle violations of (asymptotic) scaling and finite volume corrections. Mass spectrum, string tension, deconfinement temperature and lattice β-function are considered. For appropriate volumes, Monte Carlo investigations seem to be able to control the finite volume continuum limit. Contact is made with Luescher's small volume expansion and possibly also with the asymptotic large volume behavior. 41 refs., 19 figs

  9. Size structure, not metabolic scaling rules, determines fisheries reference points

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Beyer, Jan

    2015-01-01

Impact assessments of fishing on a stock require parameterization of vital rates: growth, mortality and recruitment. For 'data-poor' stocks, vital rates may be estimated from empirical size-based relationships or from life-history invariants. However, a theoretical framework to synthesize these empirical relations is lacking. Here, we combine life-history invariants, metabolic scaling and size-spectrum theory to develop a general size- and trait-based theory for demography and recruitment of exploited fish stocks. Important concepts are physiological or metabolic scaled mortalities and flux... is that larger species have a higher egg production per recruit than small species. This means that density dependence is stronger for large than for small species and has the consequence that fisheries reference points that incorporate recruitment do not obey metabolic scaling rules. This result implies

  10. Nanomedicine: tiny particles and machines give huge gains.

    Science.gov (United States)

    Tong, Sheng; Fine, Eli J; Lin, Yanni; Cradick, Thomas J; Bao, Gang

    2014-02-01

    Nanomedicine is an emerging field that integrates nanotechnology, biomolecular engineering, life sciences and medicine; it is expected to produce major breakthroughs in medical diagnostics and therapeutics. Nano-scale structures and devices are compatible in size with proteins and nucleic acids in living cells. Therefore, the design, characterization and application of nano-scale probes, carriers and machines may provide unprecedented opportunities for achieving a better control of biological processes, and drastic improvements in disease detection, therapy, and prevention. Recent advances in nanomedicine include the development of nanoparticle (NP)-based probes for molecular imaging, nano-carriers for drug/gene delivery, multifunctional NPs for theranostics, and molecular machines for biological and medical studies. This article provides an overview of the nanomedicine field, with an emphasis on NPs for imaging and therapy, as well as engineered nucleases for genome editing. The challenges in translating nanomedicine approaches to clinical applications are discussed.

  11. Development and psychometric evaluation of the breast size satisfaction scale.

    Science.gov (United States)

    Pahlevan Sharif, Saeed

    2017-10-09

Purpose: The purpose of this paper is to develop and psychometrically evaluate an instrument, the Breast Size Satisfaction Scale (BSSS), to assess breast size satisfaction. Design/methodology/approach: The scale was developed using a set of 16 computer-generated 3D images of breasts to overcome some of the limitations of existing instruments. The images were presented to participants, who were asked to select the figure that most accurately depicted their actual breast size and the figure that most closely represented their ideal breast size. Breast size satisfaction was computed by subtracting the absolute value of the difference between ideal and actual perceived size from 16, such that higher values indicate greater breast size satisfaction. Findings: Study 1 (n=65 female undergraduate students) showed good test-retest reliability, and study 2 (n=1,000 Iranian women, aged 18 years and above) provided support for convergent validity using a nomological network approach. Originality/value: The BSSS demonstrated good psychometric properties and thus can be used in future studies to assess breast size satisfaction among women.
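The scoring rule described above is simple arithmetic and can be written directly (a sketch of the described computation with figure indices 1-16, not the authors' implementation):

```python
def bsss_score(actual, ideal, n_figures=16):
    """Breast size satisfaction as defined in the abstract:
    n_figures minus |ideal - actual|; higher = more satisfied."""
    if not (1 <= actual <= n_figures and 1 <= ideal <= n_figures):
        raise ValueError("figure choices must be between 1 and n_figures")
    return n_figures - abs(ideal - actual)

print(bsss_score(7, 7), bsss_score(4, 12))  # 16 (perfect match) vs 8
```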

  12. Investigations of grain size dependent sediment transport phenomena on multiple scales

    Science.gov (United States)

    Thaxton, Christopher S.

Sediment transport processes in coastal and fluvial environments resulting from disturbances such as urbanization, mining, agriculture, military operations, and climatic change have significant impacts on local, regional, and global environments. Primarily, these impacts include the erosion and deposition of sediment, channel network modification, reduction in downstream water quality, and the delivery of chemical contaminants. The scale and spatial distribution of these effects are largely attributable to the size distribution of the sediment grains that become eligible for transport. An improved understanding of advective and diffusive grain-size-dependent sediment transport phenomena will lead to the development of more accurate predictive models and more effective control measures. To this end, three studies were performed that investigated grain-size-dependent sediment transport on three different scales. Discrete-particle computer simulations of sheet-flow bedload transport on the scale of 0.1--100 millimeters were performed on a heterogeneous population of grains of various sizes. The relative transport rates and diffusivities of grains under both oscillatory and uniform, steady flow conditions were quantified. These findings suggest that boundary-layer formalisms should describe surface roughness through a representative grain size that is functionally dependent on the applied flow parameters. On the scale of 1--10 m, experiments were performed to quantify the hydrodynamics and sediment capture efficiency of various baffles installed in a sediment retention pond, a commonly used sedimentation control measure in watershed applications. Analysis indicates that an optimal sediment capture effectiveness may be achieved based on baffle permeability, pond geometry, and flow rate. Finally, on the scale of 10--1,000 m, a distributed, bivariate watershed terrain evolution module was developed within GRASS GIS. Simulation results for variable grain sizes and for

  13. A Study of the Resolution of Dental Intraoral X-Ray Machines

    International Nuclear Information System (INIS)

    Kim, Seon Ju; Chung, Hyon De

    1990-01-01

The purpose of this study was to assess the resolution and focal spot size of dental X-ray machines. Fifty dental X-ray machines used in general dental clinics were selected for measurement. The time since installation of the machines varied from 1 to 10 years. Resolution was measured with a resolution test pattern, and focal spot size with a star test pattern. The following results were obtained: 1. The resolution of dental intraoral X-ray machines did not change significantly over ten years. 2. The focal spot size of dental intraoral X-ray machines did not increase significantly over ten years. The difference between the mean measured focal spot size and the nominal focal spot size was significant at the 0.05 level for machines used for more than 3 years.

  14. Deconfinement phase transition and finite-size scaling in SU(2) lattice gauge theory

    International Nuclear Information System (INIS)

    Mogilevskij, O.A.

    1988-01-01

A technique for calculating deconfinement phase-transition parameters based on finite-size scaling theory is suggested. The essence of the technique lies in constructing a universal scaling function from numerical data obtained on finite lattices of different sizes and extracting the phase-transition parameters of the infinite-lattice system. Finite-size scaling was originally developed in spin-system theory. The critical index β for the Polyakov loop and the SU(2) deconfinement temperature of lattice gauge theory are calculated on the basis of this technique. The obtained value agrees with the critical index of magnetization in the three-dimensional Ising model.

  15. Finite-size scaling a collection of reprints

    CERN Document Server

    1988-01-01

Over the past few years, finite-size scaling has become an increasingly important tool in studies of critical systems. This is partly due to an increased understanding of finite-size effects by analytical means, and partly due to our ability to treat larger systems with large computers. The aim of this volume was to collect those papers which have been important for this progress and which illustrate novel applications of the method. The emphasis has been placed on relatively recent developments, including the use of the ε-expansion and of conformal methods.

  16. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job; the solution of the large-scale JSP is then obtained by iteratively solving the sub-problems. To improve the solving efficiency and solution quality of the sub-problems, a detection method for multi-bottleneck machines based on the critical path is proposed, by which the unscheduled operations are divided into bottleneck and non-bottleneck operations. Following the principle in the Theory of Constraints (TOC) that the bottleneck leads the performance of the whole manufacturing system, the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In constructing the sub-problems, some operations of the previously scheduled sub-problem are moved into the successive sub-problem for re-optimization; this strategy improves solution quality. When solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value also improves solution quality. Research limitations/implications: Several assumptions reduce the complexity of the problem: the processing route of each job is predetermined, the processing time of each operation is fixed, there are no machine breakdowns, and no preemption of operations is allowed. These assumptions should be reconsidered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the
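The dispatching-rule side of such a heuristic can be sketched with a tiny greedy scheduler (toy instance invented; the bottleneck detection and genetic algorithm are omitted): at each step, dispatch the eligible operation with the shortest processing time.

```python
# Greedy list scheduling for a toy job shop: among the next unscheduled
# operation of every job, dispatch the one with the shortest processing
# time (SPT rule), as used for non-bottleneck operations in the paper.
jobs = {                     # job -> sequence of (machine, processing_time)
    "J1": [(0, 3), (1, 2)],
    "J2": [(1, 4), (0, 1)],
    "J3": [(0, 2), (1, 3)],
}

next_op = {j: 0 for j in jobs}
job_ready = {j: 0 for j in jobs}       # when the job's previous op finishes
machine_ready = {0: 0, 1: 0}           # when each machine becomes free

while any(next_op[j] < len(ops) for j, ops in jobs.items()):
    # Candidates = the next operation of each unfinished job.
    cands = [(jobs[j][next_op[j]][1], j) for j in jobs if next_op[j] < len(jobs[j])]
    _, j = min(cands)                  # SPT dispatching rule
    m, p = jobs[j][next_op[j]]
    start = max(job_ready[j], machine_ready[m])
    job_ready[j] = machine_ready[m] = start + p
    next_op[j] += 1

makespan = max(job_ready.values())
print(makespan)  # 15 for this toy instance
```

A myopic rule like SPT is fast but can schedule badly around the bottleneck machine, which is precisely why the heuristic reserves the bottleneck operations for a global optimizer.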

  17. Tipping the scales: Evolution of the allometric slope independent of average trait size.

    Science.gov (United States)

    Stillwell, R Craig; Shingleton, Alexander W; Dworkin, Ian; Frankino, W Anthony

    2016-02-01

The scaling of body parts is central to the expression of morphology across body sizes and to the generation of morphological diversity within and among species. Although patterns of scaling-relationship evolution have been well documented for over one hundred years, little is known about how selection acts to generate these patterns. In part, this is because it is unclear to what extent the elements of log-linear scaling relationships - the intercept (mean trait size) and the slope - can evolve independently. Here, using the wing-body size scaling relationship in Drosophila melanogaster as an empirical model, we use artificial selection to demonstrate that the slope of a morphological scaling relationship between an organ (the wing) and body size can evolve independently of mean organ or body size. We discuss our findings in the context of how selection likely operates on morphological scaling relationships in nature, the developmental basis for evolved changes in scaling, and the general approach of using individual-based selection experiments to study the expression and evolution of morphological scaling. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  18. The influence of the negative-positive ratio and screening database size on the performance of machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Bojarski, Andrzej J

    2017-01-01

    The machine learning-based virtual screening of molecular databases is a commonly used approach to identify hits. However, many aspects associated with training predictive models can influence the final performance and, consequently, the number of hits found. Thus, we performed a systematic study of the simultaneous influence of the proportion of negatives to positives in the testing set, the size of screening databases and the type of molecular representations on the effectiveness of classification. The results obtained for eight protein targets, five machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest), two types of molecular fingerprints (MACCS and CDK FP) and eight screening databases with different numbers of molecules confirmed our previous findings that increases in the ratio of negative to positive training instances greatly influenced most of the investigated parameters of the ML methods in simulated virtual screening experiments. However, the performance of screening was shown to also be highly dependent on the molecular library dimension. Generally, with the increasing size of the screened database, the optimal training ratio also increased, and this ratio can be rationalized using the proposed cost-effectiveness threshold approach. To increase the performance of machine learning-based virtual screening, the training set should be constructed in a way that considers the size of the screening database.

  19. A general model for the scaling of offspring size and adult size.

    Science.gov (United States)

    Falster, Daniel S; Moles, Angela T; Westoby, Mark

    2008-09-01

    Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per time, a shortened juvenile phase, and advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This goes on to explain alternative allometries between log offspring size and log adult size, as observed in mammals (slope = 0.95) and plants (slope = 0.54). Further, it suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.

  20. Scaling up liquid state machines to predict over address events from dynamic vision sensors.

    Science.gov (United States)

    Kaiser, Jacques; Stal, Rainer; Subramoney, Anand; Roennau, Arne; Dillmann, Rüdiger

    2017-09-01

    Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole [Formula: see text] event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282-93 and Burgsteiner H et al 2007 Appl. Intell. 26 99-109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.

  1. Machine technology: a survey

    International Nuclear Information System (INIS)

    Barbier, M.M.

    1981-01-01

    An attempt was made to find existing machines that have been upgraded and that could be used for large-scale decontamination operations outdoors. Such machines are in the building industry, the mining industry, and the road construction industry. The road construction industry has yielded the machines in this presentation. A review is given of operations that can be done with the machines available.

  2. Finite size scaling and spectral density studies

    International Nuclear Information System (INIS)

    Berg, B.A.

    1991-01-01

    Finite size scaling (FSS) and spectral density (SD) studies are reported for the deconfining phase transition. This talk concentrates on Monte Carlo (MC) results for pure SU(3) gauge theory, obtained in collaboration with Alves and Sanielevici, but the methods are expected to be useful for full QCD as well. (orig.)

  3. Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data

    Science.gov (United States)

    Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    The most productive scale size (MPSS) is a measurement that states how resources should be organized and utilized to achieve optimal results. MPSS can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate MPSS, each decision making unit (DMU) should pay attention to its level of input-output efficiency. With the data envelopment analysis (DEA) method, a DMU can identify the units used as references, which helps to find the causes of and solutions to inefficiencies and to optimize productivity, the main advantage in managerial applications. Therefore, DEA is chosen for estimating MPSS, focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating MPSS with integer-valued input data in the DEA method.
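For the special case of one input and one output per DMU, the CCR efficiency score reduces from a linear program to a ratio comparison, which makes the idea easy to sketch. The DMU data below are hypothetical; the general multi-input, multi-output CCR and BCC models require an LP solver.

```python
def ccr_efficiency(dmus):
    """CCR (constant returns to scale) efficiency for the special case of
    one input and one output per DMU, where the linear program reduces to
    comparing each output/input ratio against the best ratio observed.

    dmus: dict name -> (input, output). Returns dict name -> efficiency."""
    best = max(y / x for x, y in dmus.values())
    return {name: (y / x) / best for name, (x, y) in dmus.items()}

# Hypothetical DMUs, e.g. branches with staff as input and sales as output.
dmus = {"A": (2, 4), "B": (3, 3), "C": (4, 4)}
print(ccr_efficiency(dmus))  # A is efficient; B and C score 0.5
```

In DEA, the ratio of a DMU's CCR score to its BCC score is commonly interpreted as scale efficiency, and a value of 1 indicates operation at MPSS.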

  4. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    Science.gov (United States)

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
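The Kernel-Adatron rule mentioned above has a compact dual form. The sketch below (plain numpy on a toy linearly separable set; it does not reproduce the paper's evolutionary training) pushes each dual coefficient toward unit margin and clips it at zero:

```python
import numpy as np

def kernel_adatron(X, y, kernel, eta=0.1, epochs=100):
    """Train dual coefficients alpha with the Kernel-Adatron update:
        alpha_i <- max(0, alpha_i + eta * (1 - y_i * f(x_i))),
    where f(x) = sum_j alpha_j * y_j * K(x_j, x)."""
    n = len(y)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * np.sum(alpha * y * K[:, i])
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
    return alpha

def predict(X_train, y, alpha, kernel, x):
    return np.sign(sum(a * yi * kernel(xi, x)
                       for a, yi, xi in zip(alpha, y, X_train)))

# Toy 1-D data with a linear kernel; labels follow the sign of x.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
lin = lambda a, b: float(np.dot(a, b))
alpha = kernel_adatron(X, y, lin)
preds = [predict(X, y, alpha, lin, x) for x in X]
print(preds)  # all four training points classified correctly
```

Swapping `lin` for an RBF kernel handles non-separable geometries; the evolutionary variant in the paper replaces the fixed-rate update with evolved search steps.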

  5. Perception of acoustic scale and size in musical instrument sounds.

    Science.gov (United States)

    van Dinther, Ralph; Patterson, Roy D

    2006-10-01

    There is size information in natural sounds. For example, as humans grow in height, their vocal tracts increase in length, producing a predictable decrease in the formant frequencies of speech sounds. Recent studies have shown that listeners can make fine discriminations about which of two speakers has the longer vocal tract, supporting the view that the auditory system discriminates changes on the acoustic-scale dimension. Listeners can also recognize vowels scaled well beyond the range of vocal tracts normally experienced, indicating that perception is robust to changes in acoustic scale. This paper reports two perceptual experiments designed to extend research on acoustic scale and size perception to the domain of musical sounds: The first study shows that listeners can discriminate the scale of musical instrument sounds reliably, although not quite as well as for voices. The second experiment shows that listeners can recognize the family of an instrument sound which has been modified in pitch and scale beyond the range of normal experience. We conclude that processing of acoustic scale in music perception is very similar to processing of acoustic scale in speech perception.

  6. Transient characteristics of current lead losses for the large scale high-temperature superconducting rotating machine

    International Nuclear Information System (INIS)

    Le, T. D.; Kim, J. H.; Park, S. I.; Kim, D. J.; Kim, H. M.; Lee, H. G.; Yoon, Y. S.; Jo, Y. S.; Yoon, K. Y.

    2014-01-01

    To minimize the heat loss of the current leads for a high-temperature superconducting (HTS) rotating machine, the conductor properties and the lead geometry, such as length, cross section, and cooling surface area, are among the significant factors that must be selected. An optimal lead for a large-scale HTS rotating machine has been presented previously. Continuing this line of work, this paper further reduces the heat loss of the HTS part according to a different model. It also determines the simplifying conditions for an evaluation of the transient characteristics of the main flux-flow loss and eddy-current loss during the charging and discharging periods.

  7. Impedance Scaling and Impedance Control

    International Nuclear Information System (INIS)

    Chou, W.; Griffin, J.

    1997-06-01

    When a machine becomes really large, such as the Very Large Hadron Collider (VLHC), of which the circumference could reach the order of megameters, beam instability could be an essential bottleneck. This paper studies the scaling of the instability threshold vs. machine size when the coupling impedance scales in a "normal" way. It is shown that the beam would be intrinsically unstable for the VLHC. As a possible solution to this problem, it is proposed to introduce local impedance inserts for controlling the machine impedance. In the longitudinal plane, this could be done by using a heavily detuned rf cavity (e.g., a biconical structure), which could provide large imaginary impedance with the right sign (i.e., inductive or capacitive) while keeping the real part small. In the transverse direction, a carefully designed variation of the cross section of a beam pipe could generate negative impedance that would partially compensate the transverse impedance in one plane.

  8. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution
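The fluctuation route described above can be illustrated on synthetic data. For an ideal gas, the particle count in an open sub-volume is Poisson distributed, so the Kirkwood-Buff integral G = V*Var(N)/&lt;N&gt;^2 - V/&lt;N&gt; should vanish; the parameters below (mean count, number of samples) are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

def kb_integral(counts, volume):
    """Kirkwood-Buff integral of a single-component fluid from particle
    number fluctuations in an open sub-volume:
        G = V * Var(N) / <N>**2 - V / <N>.
    For an ideal gas Var(N) = <N> (Poisson statistics), so G = 0."""
    mean = counts.mean()
    var = counts.var()
    return volume * var / mean**2 - volume / mean

rng = np.random.default_rng(0)
counts = rng.poisson(lam=50.0, size=100_000)  # ideal-gas sub-volume counts
G = kb_integral(counts, volume=1.0)
print(abs(G) < 0.01)  # statistically consistent with the ideal value G = 0
```

For an interacting fluid the same estimator picks up the excess or depleted correlations, which is exactly what the finite-size-scaling method counts in embedded sub-volumes.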

  9. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    International Nuclear Information System (INIS)

    Dednam, W; Botha, A E

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution

  10. Beliefs about penis size: validation of a scale for men ashamed about their penis size.

    Science.gov (United States)

    Veale, David; Eshkevari, Ertimiss; Read, Julie; Miles, Sarah; Troglia, Andrea; Phillips, Rachael; Echeverria, Lina Maria Carmona; Fiorito, Chiara; Wylie, Kevan; Muir, Gordon

    2014-01-01

    No measures are available for understanding beliefs in men who experience shame about the perceived size of their penis. Such a measure might be helpful for treatment planning, and measuring outcome after any psychological or physical intervention. Our aim was to validate a newly developed measure called the Beliefs about Penis Size Scale (BAPS). One hundred seventy-three male participants completed a new questionnaire consisting of 18 items to be validated and developed into the BAPS, as well as various other standardized measures. A urologist also measured actual penis size. The BAPS was validated against six psychosexual self-report questionnaires as well as penile size measurements. Exploratory factor analysis reduced the number of items in the BAPS from 18 to 10, which was best explained by one factor. The 10-item BAPS had good internal consistency and correlated significantly with measures of depression, anxiety, body image quality of life, social anxiety, erectile function, overall satisfaction, and the importance attached to penis size. The BAPS was not found to correlate with actual penis size. It was able to discriminate between those who had concerns or were dissatisfied about their penis size and those who were not. This is the first study to develop a scale for measurement of beliefs about penis size. It may be used as part of an assessment for men who experience shame about the perceived size of their penis and as an outcome measure after treatment. The BAPS measures various manifestations of masculinity and shame about their perceived penis size including internal self-evaluative beliefs; negative evaluation by others; anticipated consequences of a perceived small penis, and extreme self-consciousness. © 2013 International Society for Sexual Medicine.

  11. Size-density scaling in protists and the links between consumer-resource interaction parameters.

    Science.gov (United States)

    DeLong, John P; Vasseur, David A

    2012-11-01

    Recent work indicates that the interaction between body-size-dependent demographic processes can generate macroecological patterns such as the scaling of population density with body size. In this study, we evaluate this possibility for grazing protists and also test whether demographic parameters in these models are correlated after controlling for body size. We compiled data on the body-size dependence of consumer-resource interactions and population density for heterotrophic protists grazing algae in laboratory studies. We then used nested dynamic models to predict both the height and slope of the scaling relationship between population density and body size for these protists. We also controlled for consumer size and assessed links between model parameters. Finally, we used the models and the parameter estimates to assess the individual- and population-level dependence of resource use on body-size and prey-size selection. The predicted size-density scaling for all models matched closely to the observed scaling, and the simplest model was sufficient to predict the pattern. Variation around the mean size-density scaling relationship may be generated by variation in prey productivity and area of capture, but residuals are relatively insensitive to variation in prey size selection. After controlling for body size, many consumer-resource interaction parameters were correlated, and a positive correlation between residual prey size selection and conversion efficiency neutralizes the apparent fitness advantage of taking large prey. Our results indicate that widespread community-level patterns can be explained with simple population models that apply consistently across a range of sizes. They also indicate that the parameter space governing the dynamics and the steady states in these systems is structured such that some parts of the parameter space are unlikely to represent real systems. Finally, predator-prey size ratios represent a kind of conundrum, because they are

  12. High-precision micro/nano-scale machining system

    Science.gov (United States)

    Kapoor, Shiv G.; Bourne, Keith Allen; DeVor, Richard E.

    2014-08-19

    A high precision micro/nanoscale machining system. A multi-axis movement machine provides relative movement along multiple axes between a workpiece and a tool holder. A cutting tool is disposed on a flexible cantilever held by the tool holder, the tool holder being movable to provide at least two of the axes to set the angle and distance of the cutting tool relative to the workpiece. A feedback control system uses measurement of deflection of the cantilever during cutting to maintain a desired cantilever deflection and hence a desired load on the cutting tool.
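The deflection feedback loop described above can be sketched as a discrete-time PI controller holding a first-order plant at a setpoint. The plant model and the gains below are hypothetical stand-ins for the actual cantilever dynamics, chosen only to show the regulation idea:

```python
def regulate_deflection(setpoint, steps=10_000, dt=1e-3,
                        kp=2.0, ki=5.0, a=1.0, b=1.0):
    """PI control of a first-order plant d' = -a*d + b*u, standing in for
    the cantilever-deflection dynamics. Returns the final deflection."""
    d = 0.0          # current deflection
    integral = 0.0   # accumulated error (integral term)
    for _ in range(steps):
        error = setpoint - d
        integral += error * dt
        u = kp * error + ki * integral  # control effort, e.g. depth of cut
        d += dt * (-a * d + b * u)      # plant update (explicit Euler)
    return d

final = regulate_deflection(setpoint=0.5)
print(abs(final - 0.5) < 1e-3)  # deflection settles at the setpoint
```

The integral term is what removes steady-state error, so the cutting load, which is proportional to deflection, is held at its desired value despite constant disturbances.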

  13. The scaling of human interactions with city size.

    Science.gov (United States)

    Schläpfer, Markus; Bettencourt, Luís M A; Grauwin, Sébastian; Raschke, Mathias; Claxton, Rob; Smoreda, Zbigniew; West, Geoffrey B; Ratti, Carlo

    2014-09-06

    The size of cities is known to play a fundamental role in social and economic life. Yet, its relation to the structure of the underlying network of human interactions has not been investigated empirically in detail. In this paper, we map society-wide communication networks to the urban areas of two European countries. We show that both the total number of contacts and the total communication activity grow superlinearly with city population size, according to well-defined scaling relations and resulting from a multiplicative increase that affects most citizens. Perhaps surprisingly, however, the probability that an individual's contacts are also connected with each other remains largely unaffected. These empirical results predict a systematic and scale-invariant acceleration of interaction-based spreading phenomena as cities get bigger, which is numerically confirmed by applying epidemiological models to the studied networks. Our findings should provide a microscopic basis towards understanding the superlinear increase of different socioeconomic quantities with city size, that applies to almost all urban systems and includes, for instance, the creation of new inventions or the prevalence of certain contagious diseases. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
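Scaling relations of the form Y = Y0 * N^beta are typically estimated by ordinary least squares in log-log coordinates. The sketch below fits synthetic data generated with an assumed exponent of 1.15; it illustrates the fitting procedure only and uses none of the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "cities": population N, interaction count Y ~ N^1.15 with
# multiplicative log-normal noise (all parameters illustrative).
N = 10.0 ** rng.uniform(4.0, 7.0, size=500)
Y = 0.5 * N**1.15 * np.exp(rng.normal(0.0, 0.1, size=500))

# OLS fit of log Y = log Y0 + beta * log N; polyfit returns slope first.
beta, logY0 = np.polyfit(np.log(N), np.log(Y), 1)
print(beta)  # close to the superlinear exponent 1.15 used to generate Y
```

A fitted beta above 1 is the signature of superlinear scaling: doubling city size more than doubles total interaction activity.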

  14. A Mathematical Model for Scheduling a Batch Processing Machine with Multiple Incompatible Job Families, Non-identical Job dimensions, Non-identical Job sizes, Non-agreeable release times and due dates

    International Nuclear Information System (INIS)

    Ramasubramaniam, M; Mathirajan, M

    2013-01-01

    The paper addresses the problem of scheduling a batch processing machine with multiple incompatible job families, non-identical job dimensions, non-identical job sizes and non-agreeable release dates to minimize makespan. The research problem is solved by proposing a mixed integer programming model that appropriately takes into account the parameters considered in the problem. The proposed model is validated using a numerical example. The experiments conducted show that the model can pose significant difficulties in solving large-scale instances. The paper concludes by giving the scope for future work and some alternative approaches that can be used for solving this class of problems.
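A common baseline for this class of problems (a heuristic alternative, not the paper's MIP) is first-fit-decreasing batching: jobs are grouped into batches within each incompatible family subject to machine capacity, a batch takes as long as its longest job, and the machine processes batches sequentially. The instance below is hypothetical:

```python
def ffd_batch_makespan(jobs, capacity):
    """First-fit-decreasing batching for a batch processing machine with
    incompatible job families. jobs: list of (family, size, time).
    Jobs of different families cannot share a batch; the sizes in a batch
    may not exceed capacity; a batch's processing time is its longest
    job's time. Returns the makespan on a single machine."""
    families = {}
    for fam, size, time in jobs:
        families.setdefault(fam, []).append((size, time))
    makespan = 0
    for fam_jobs in families.values():
        batches = []  # each batch: [used_capacity, batch_time]
        for size, time in sorted(fam_jobs, reverse=True):  # decreasing size
            for batch in batches:
                if batch[0] + size <= capacity:  # first batch it fits in
                    batch[0] += size
                    batch[1] = max(batch[1], time)
                    break
            else:
                batches.append([size, time])
        makespan += sum(t for _, t in batches)
    return makespan

jobs = [("A", 6, 3), ("A", 5, 2), ("A", 4, 4), ("B", 7, 5), ("B", 3, 1)]
print(ffd_batch_makespan(jobs, capacity=10))  # -> 11
```

Such heuristics give quick upper bounds that can warm-start or sanity-check an exact MIP on the large instances the paper finds difficult.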

  15. Hunting for Hydrothermal Vents at the Local-Scale Using AUV's and Machine-Learning Classification in the Earth's Oceans

    Science.gov (United States)

    White, S. M.

    2018-05-01

    New AUV-based mapping technology coupled with machine-learning methods for detecting individual vents and vent fields at the local-scale raise the possibility of understanding the geologic controls on hydrothermal venting.

  16. Trap-size scaling in confined-particle systems at quantum transitions

    International Nuclear Information System (INIS)

    Campostrini, Massimo; Vicari, Ettore

    2010-01-01

    We develop a trap-size scaling theory for trapped particle systems at quantum transitions. As a theoretical laboratory, we consider a quantum XY chain in an external transverse field acting as a trap for the spinless fermions of its quadratic Hamiltonian representation. We discuss trap-size scaling at the Mott insulator to superfluid transition in the Bose-Hubbard model. We present exact and accurate numerical results for the XY chain and for the low-density Mott transition in the hard-core limit of the one-dimensional Bose-Hubbard model. Our results are relevant for systems of cold atomic gases in optical lattices.

  17. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
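The core claim, that a nonparametric learner can recover conditional probabilities generated by a logistic model, can be checked with a much simpler stand-in than a random forest: a local-averaging estimator. Everything below (the model, the window width, the sample size) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data generated from a true logistic model: P(y=1|x) = 1/(1 + exp(-2x)).
n = 200_000
x = rng.uniform(-3.0, 3.0, size=n)
p_true = 1.0 / (1.0 + np.exp(-2.0 * x))
y = (rng.uniform(size=n) < p_true).astype(float)

def prob_machine(x0, h=0.05):
    """Nonparametric conditional probability estimate: the mean binary
    outcome among samples within a window of half-width h around x0."""
    mask = np.abs(x - x0) < h
    return y[mask].mean()

est = prob_machine(0.0)  # the true logistic probability at x = 0 is 0.5
print(abs(est - 0.5) < 0.03)
```

A random forest's `predict_proba` plays the same role with an adaptive, high-dimensional notion of "local", which is why it recovers logistic probabilities nearly as efficiently as the correct parametric model.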

  18. Percolation through voids around overlapping spheres: A dynamically based finite-size scaling analysis

    Science.gov (United States)

    Priour, D. J.

    2014-01-01

    The percolation threshold for flow or conduction through voids surrounding randomly placed spheres is calculated. With large-scale Monte Carlo simulations, we give a rigorous continuum treatment to the geometry of the impenetrable spheres and the spaces between them. To properly exploit finite-size scaling, we examine multiple systems of differing sizes, with suitable averaging over disorder, and extrapolate to the thermodynamic limit. An order parameter based on the statistical sampling of stochastically driven dynamical excursions and amenable to finite-size scaling analysis is defined, calculated for various system sizes, and used to determine the critical volume fraction ϕc=0.0317±0.0004 and the correlation length exponent ν =0.92±0.05.
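The spanning test at the heart of such percolation simulations can be sketched with a union-find over a square lattice of randomly open sites. This generic 2D site-percolation illustration (threshold near 0.593) is far simpler than the paper's continuum void geometry, but it shows the spanning probability rising sharply through the threshold:

```python
import random

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path halving
        a = parent[a]
    return a

def spans(L, p, rng):
    """Does an L x L site-percolation lattice with open probability p
    contain a top-to-bottom cluster of open sites?"""
    open_site = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    top, bottom = L * L, L * L + 1  # virtual source and sink nodes
    parent = list(range(L * L + 2))
    def union(a, b):
        parent[find(parent, a)] = find(parent, b)
    for i in range(L):
        for j in range(L):
            if not open_site[i][j]:
                continue
            idx = i * L + j
            if i == 0:
                union(idx, top)
            if i == L - 1:
                union(idx, bottom)
            if i > 0 and open_site[i - 1][j]:
                union(idx, idx - L)
            if j > 0 and open_site[i][j - 1]:
                union(idx, idx - 1)
    return find(parent, top) == find(parent, bottom)

rng = random.Random(7)
trials = 200
p_low = sum(spans(16, 0.45, rng) for _ in range(trials)) / trials
p_high = sum(spans(16, 0.70, rng) for _ in range(trials)) / trials
print(p_low < p_high)  # spanning probability jumps across the threshold
```

Repeating this at several lattice sizes L and locating where the spanning-probability curves cross is the finite-size-scaling step the paper uses to extrapolate its critical volume fraction to the thermodynamic limit.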

  19. Topological and sizing optimization of reinforced ribs for a machining centre

    Science.gov (United States)

    Chen, T. Y.; Wang, C. B.

    2008-01-01

    The topology optimization technique is applied to improve rib designs of a machining centre. The ribs of the original design are eliminated and new ribs are generated by topology optimization in the same 3D design space containing the original ribs. Two-dimensional plate elements are used to replace the optimum rib topologies formed by 3D rectangular elements. After topology optimization, sizing optimization is used to determine the optimum thicknesses of the ribs. When forming the optimum design problem, multiple configurations of the structure are considered simultaneously. The objective is to minimize rib weight. Static constraints confine displacements of the cutting tool and the workpiece due to cutting forces and the heat generated by spindle bearings. The dynamic constraint requires the fundamental natural frequency of the structure to be greater than a given value in order to reduce dynamic deflection. Compared with the original design, the improvement resulting from this approach is significant.

  20. Size scaling of negative hydrogen ion sources for fusion

    Science.gov (United States)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-04-01

    The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛ scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step a ½ scale ITER source went into operation at the IPP test facility ELISE with the first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER relevant size at ELISE, in which operational issues, physical aspects and the source performance are addressed, highlighting differences as well as similarities. The most ITER relevant results are: low pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a dedicated plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to the one in the prototype source despite the large size.

  1. Size scaling of negative hydrogen ion sources for fusion

    International Nuclear Information System (INIS)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-01-01

    The RF-driven negative hydrogen ion source (H−, D−) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛ scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step a ½ scale ITER source went into operation at the IPP test facility ELISE with the first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER relevant size at ELISE, in which operational issues, physical aspects and the source performance are addressed, highlighting differences as well as similarities. The most ITER relevant results are: low pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a dedicated plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to the one in the prototype source despite the large size.

  2. Zooniverse - Web scale citizen science with people and machines. (Invited)

    Science.gov (United States)

    Smith, A.; Lynn, S.; Lintott, C.; Simpson, R.

    2013-12-01

The Zooniverse (zooniverse.org) began in 2007 with the launch of Galaxy Zoo, a project in which more than 175,000 people provided shape analyses of more than 1 million galaxy images sourced from the Sloan Digital Sky Survey. These galaxy 'classifications', some 60 million in total, have since been used to produce more than 50 peer-reviewed publications based not only on the original research goals of the project but also on serendipitous discoveries made by the volunteer community. Based upon the success of Galaxy Zoo, the team has gone on to develop more than 25 web-based citizen science projects, all with a strong research focus in a range of subjects from astronomy to zoology where human-based analysis still exceeds that of machine intelligence. Over the past 6 years Zooniverse projects have collected more than 300 million data analyses from over 1 million volunteers, providing fantastically rich datasets not only for the individuals working to produce research from their project but also for the machine learning and computer vision research communities. The Zooniverse platform has always been developed to be the 'simplest thing that works', implementing only the most rudimentary algorithms for functionality such as task allocation and user-performance metrics - simplifications necessary to scale the Zooniverse so that the core team of developers and data scientists can remain small and the cost of running the computing infrastructure relatively modest. To date these simplifications have been appropriate for the data volumes and analysis tasks being addressed. This situation, however, is changing: next-generation telescopes such as the Large Synoptic Survey Telescope (LSST) will produce data volumes dwarfing those previously analyzed. If citizen science is to have a part to play in analyzing these next-generation datasets, then the Zooniverse will need to evolve into a smarter system capable, for example, of modeling the abilities of users and the complexities of

  3. Does water transport scale universally with tree size?

    Science.gov (United States)

    F.C. Meinzer; B.J. Bond; J.M. Warren; D.R. Woodruff

    2005-01-01

    1. We employed standardized measurement techniques and protocols to describe the size dependence of whole-tree water use and cross-sectional area of conducting xylem (sapwood) among several species of angiosperms and conifers. 2. The results were not inconsistent with previously proposed 3/4-power scaling of water transport with estimated above-...

  4. Finite-size scaling of survival probability in branching processes

    OpenAIRE

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Alvaro

    2014-01-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We reveal the finite-size scaling law of the survival probability for a given branching process ruled by a probability distribution of the number of offspring per element whose standard deviation is finite, obtaining the exact scaling function as well as the critical exponents. Our findings prove the universal behavi...

  5. Surface mining machines: problems of maintenance and modernization

    CERN Document Server

    Rusiński, Eugeniusz; Moczko, Przemysław; Pietrusiak, Damian

    2017-01-01

    This unique volume imparts practical information on the operation, maintenance, and modernization of heavy performance machines such as lignite mine machines, bucket wheel excavators, and spreaders. Problems of large scale machines (mega machines) are highly specific and not well recognized in the common mechanical engineering environment. Prof. Rusiński and his co-authors identify solutions that increase the durability of these machines as well as discuss methods of failure analysis and technical condition assessment procedures. "Surface Mining Machines: Problems in Maintenance and Modernization" stands as a much-needed guidebook for engineers facing the particular challenges of heavy performance machines and offers a distinct and interesting demonstration of scale-up issues for researchers and scientists from across the fields of machine design and mechanical engineering.

  6. Stochastic scheduling on unrelated machines

    NARCIS (Netherlands)

    Skutella, Martin; Sviridenko, Maxim; Uetz, Marc Jochen

    2013-01-01

    Two important characteristics encountered in many real-world scheduling problems are heterogeneous machines/processors and a certain degree of uncertainty about the actual sizes of jobs. The first characteristic entails machine-dependent processing times of jobs and is captured by the classical

  7. Finite-size scaling of survival probability in branching processes.

    Science.gov (United States)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y·e^y/(e^y − 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
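The survival probability described in this record is easy to probe numerically. The sketch below is illustrative only (not the authors' code): it Monte Carlo estimates the survival probability of a critical Galton-Watson process and evaluates the quoted scaling function G(y) = 2y·e^y/(e^y − 1); the offspring distribution and trial counts are my own choices.

```python
import math
import random

def survival_probability(offspring_pmf, generations, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a Galton-Watson process,
    with offspring counts drawn from offspring_pmf (offspring_pmf[k] = P(k
    offspring)), is still alive after the given number of generations."""
    rng = random.Random(seed)
    # cumulative distribution for inverse-transform sampling
    cum, s = [], 0.0
    for p in offspring_pmf:
        s += p
        cum.append(s)

    def draw():
        u = rng.random()
        for k, c in enumerate(cum):
            if u <= c:
                return k
        return len(cum) - 1

    alive = 0
    for _ in range(trials):
        population = 1
        for _ in range(generations):
            population = sum(draw() for _ in range(population))
            if population == 0:
                break
        if population > 0:
            alive += 1
    return alive / trials

def scaling_G(y):
    """Exact scaling function quoted in the record: G(y) = 2y e^y / (e^y - 1),
    with the y -> 0 limit equal to 2."""
    return 2.0 if y == 0 else 2.0 * y * math.exp(y) / (math.exp(y) - 1.0)

# Critical binary branching: 0 or 2 offspring with probability 1/2 each
# (mean 1, offspring variance 1).  Classical Kolmogorov asymptotics predict
# P(survive t generations) ~ 2 / (variance * t), i.e. about 0.04 for t = 50.
p_surv = survival_probability([0.5, 0.0, 0.5], generations=50)
```

This is a sketch of the generic Galton-Watson setup, not of the paper's analytical derivation; the finite-variance condition in the record is what makes the 2/(σ²t) decay, and hence the scaling collapse, universal.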

  8. Development of an electrically operated cassava slicing machine

    Directory of Open Access Journals (Sweden)

    I. S. Aji

    2013-08-01

    Full Text Available Labor input in manual cassava chips processing is very high and product quality is low. This paper presents the design and construction of an electrically operated cassava slicing machine that requires only one person to operate. Efficiency, portability, ease of operation, corrosion prevention for the slicing components, the force required to slice a cassava tuber, a capacity of 10 kg/min and uniformity in the size of the cassava chips were considered in the design and fabrication of the machine. The performance of the machine was evaluated with cassava of average length and diameter of 253 mm and 60 mm respectively at an average speed of 154 rpm. The machine produced 5.3 kg of chips of 10 mm length and 60 mm diameter in 1 minute. The efficiency of the machine was 95.6% with respect to the quantity of the input cassava. The chips were found to be well chipped to the designed thickness and shape and of generally similar size. Galvanized steel sheets were used in the cutting section to avoid corrosion of components. The machine is portable, easy to operate, and suitable for adoption in medium-sized cassava processing industries.

  9. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    Science.gov (United States)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 and about 78% for 19/20 of the time when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.

  10. Observations of the auroral width spectrum at kilometre-scale size

    Directory of Open Access Journals (Sweden)

    N. Partamies

    2010-03-01

    Full Text Available This study examines auroral colour camera data from the Canadian Dense Array Imaging SYstem (DAISY). The Dense Array consists of three imagers with different narrow (compared to all-sky view) field-of-view optics. The main scientific motivation arises from an earlier study by Knudsen et al. (2001), who used All-Sky Imager (ASI) data combined with even earlier TV camera observations (Maggs and Davis, 1968) to suggest that there is a gap in the distribution of auroral arc widths at around 1 km. With DAISY observations we are able to show that the gap is an instrument artifact due to the limited spatial resolution and coverage of commonly used instrumentation, namely ASIs and TV cameras. If the auroral scale-size spectrum is indeed continuous, the mechanisms forming these structures should be able to produce all of the different scale sizes. So far, no such single process has been proposed in the literature, and very few models are designed to interact with each other even though the ranges of their favourable conditions do overlap. All scale sizes should be considered in future studies of auroral forms and electron acceleration regions, in both observational and theoretical approaches.

  11. Machinability of IPS Empress 2 framework ceramic.

    Science.gov (United States)

    Schmidt, C; Weigl, P

    2000-01-01

    Using ceramic materials for the automatic production of ceramic dentures by CAD/CAM is a challenge, because many technological, medical, and optical demands must be considered. The IPS Empress 2 framework ceramic meets most of them. This study shows the possibilities for machining this ceramic with economical parameters. The long service-life requirement for ceramic dentures demands a ductile machined surface to avoid the well-known subsurface damage of brittle materials caused by machining. Slow and rapid damage propagation begins at break-outs and cracks, and limits life-time significantly. Therefore, ductile machined surfaces are an important requirement when machining dental ceramics. The machining tests were performed with various parameters such as tool grain size and feed speed. Denture ceramics were machined by jig grinding on a 5-axis CNC milling machine (Maho HGF 500) with a high-speed spindle of up to 120,000 rpm. The results of the wear test indicate low tool wear. With one tool, eight occlusal surfaces can be machined, including roughing and finishing. One occlusal surface takes about 60 min of machining time. Recommended parameters for roughing are middle diamond grain size (D107), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 1000 mm/min, depth of cut a(e) = 0.06 mm, width of contact a(p) = 0.8 mm, and for finishing ultra-fine diamond grain size (D46), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 100 mm/min, depth of cut a(e) = 0.02 mm, width of contact a(p) = 0.8 mm. The results of the machining tests give a reference for using IPS Empress 2 framework ceramic in CAD/CAM systems. Copyright 2000 John Wiley & Sons, Inc.

  12. Large-scale Machine Learning in High-dimensional Datasets

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen

    Over the last few decades computers have come to play an essential role in our daily lives, and data is now being collected in various domains at a faster pace than ever before. This dissertation presents research advances in four machine learning fields that all relate to the challenges imposed...... are better at modeling local heterogeneities. In the field of machine learning for neuroimaging, we introduce learning protocols for real-time functional Magnetic Resonance Imaging (fMRI) that allow for dynamic intervention in the human decision process. Specifically, the model exploits the structure of f...

  13. Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild

    Science.gov (United States)

    Broell, Franziska; Taggart, Christopher T.

    2015-01-01

    This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^−1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming ‘efficiently’, is independent of size, confirming that stroke frequency scales as length^−1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^−1 (r² = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^−0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild. PMID:26673777
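The reported within-species relation, dominant TBF proportional to length^−1, implies a simple size estimator. The sketch below is illustrative only: `species_constant` is a hypothetical species-specific factor that would have to be fitted from tagged fish of known fork length; no such value is given in the record.

```python
def fork_length_from_tbf(tbf_hz: float, species_constant: float) -> float:
    """Invert the scaling TBF = species_constant / length to estimate fork
    length (m) from the dominant tail beat frequency (Hz).  species_constant
    (Hz*m) is a hypothetical, species-specific fit parameter, not a value
    reported in the study."""
    if tbf_hz <= 0:
        raise ValueError("tail beat frequency must be positive")
    return species_constant / tbf_hz

# With an assumed constant of 1.2 Hz*m, a dominant TBF of 2 Hz would
# correspond to a fork length of 0.6 m.
length_m = fork_length_from_tbf(2.0, 1.2)
```

In practice the constant would be obtained by regressing log(TBF) on log(length) for tagged individuals, which is essentially the allometric fit (r² = 0.73, n = 40) described in the record.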

  14. Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild.

    Directory of Open Access Journals (Sweden)

    Franziska Broell

    Full Text Available This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^−1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming 'efficiently', is independent of size, confirming that stroke frequency scales as length^−1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^−1 (r² = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^−0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild.

  15. Investigation of the gas film in micro scale induced error on the performance of the aerostatic spindle in ultra-precision machining

    Science.gov (United States)

    Chen, Dongju; Huo, Chen; Cui, Xianxian; Pan, Ri; Fan, Jinwei; An, Chenhui

    2018-05-01

    The objective of this work is to study the influence of the error induced by the gas film at the micro scale on the static and dynamic behavior of a shaft supported by aerostatic bearings. Static and dynamic models of the aerostatic bearing are presented using the stiffness and damping calculated at the micro scale. The static simulation shows that the deformation of the aerostatic spindle system at the micro scale is decreased. For the dynamic behavior, both the stiffness and damping in the axial and radial directions are increased at the micro scale. Experiments on the stiffness and rotation error of the spindle show that the shaft deflection computed with the micro-scale parameters is very close to the measured deviation of the spindle system. The frequency content in the transient analysis is similar to the actual test, and both are higher than the results from the traditional model that neglects the micro-scale factor. It can therefore be concluded that values accounting for the micro-scale factor are closer to the actual working conditions of the aerostatic spindle system. These results provide a theoretical basis for the design and machining processes of machine tools.

  16. Real-time wavelet-based inline banknote-in-bundle counting for cut-and-bundle machines

    Science.gov (United States)

    Petker, Denis; Lohweg, Volker; Gillich, Eugen; Türke, Thomas; Willeke, Harald; Lochmüller, Jens; Schaede, Johannes

    2011-03-01

    Automatic banknote sheet cut-and-bundle machines are widely used within the scope of banknote production. Besides the cutting and bundling, which is a mature technology, image-processing-based quality inspection for this type of machine is attractive. In this work we present a new real-time touchless counting and cutting-blade quality assurance system for cut-and-bundle applications in banknote production, based on a color CCD camera and a dual-core computer. The system, which applies wavelet-based multi-scale filtering, is able to count the banknotes inside a 100-note bundle within 200-300 ms, depending on the window size.

  17. Power Scaling of Petroleum Field Sizes and Movie Box Office Earnings.

    Science.gov (United States)

    Haley, J. A.; Barton, C. C.

    2017-12-01

    The size-cumulative frequency distribution of petroleum fields has long been shown to be power scaling (Mandelbrot, 1963; Barton and Scholz, 1995). The scaling exponents for petroleum field volumes range from 0.8 to 1.08 worldwide and are used to assess the size and number of undiscovered fields. The size-cumulative frequency distribution of movie box office earnings also exhibits power scaling for the domestic, overseas, and worldwide gross box office earnings of the top 668 earning movies released between 1939 and 2016 (http://www.boxofficemojo.com/alltime/). Box office earnings were reported in dollars-of-the-day and were converted to 2015 U.S. dollars using the U.S. consumer price index (CPI) for both domestic and overseas earnings, because overseas earnings are not reported by country and there is no single inflation index appropriate for all overseas countries. Adjusting the box office earnings using the CPI has two effects on the power-function fits. First, the scaling exponent falls in a narrow range (2.3 - 2.5) across the three data sets; second, the scatter of the data points about the fitted power function is reduced. The scaling exponents for the adjusted values are 2.3 for domestic, 2.5 for overseas, and 2.5 for worldwide box office earnings. The smaller the scaling exponent, the greater the proportion of all earnings contributed by a smaller proportion of all movies: E = P^((a−2)/(a−1)), where E is the fraction of earnings and P is the fraction of all movies in the data set. The scaling exponents for box office earnings (2.3 - 2.5) mean that approximately 20% of the top earning movies contribute 70-55% of all earnings, for domestic and worldwide earnings respectively.
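The quoted link between the scaling exponent and the concentration of earnings can be checked directly. Reading the record's relation as E = P^((a−2)/(a−1)) (my reconstruction from the numbers given), the sketch below reproduces the stated "top 20% → roughly 70% and 55-58%" figures:

```python
def earnings_share(top_fraction: float, exponent: float) -> float:
    """Share of total earnings contributed by the top `top_fraction` of
    movies under a power-law size distribution with scaling exponent
    `exponent` (valid for exponent > 2):  E = P**((a - 2) / (a - 1))."""
    a = exponent
    return top_fraction ** ((a - 2.0) / (a - 1.0))

# The record's exponents 2.3 (domestic) and 2.5 (worldwide) give, for the
# top 20% of movies, earnings shares of roughly 0.69 and 0.58.
share_domestic = earnings_share(0.2, 2.3)
share_worldwide = earnings_share(0.2, 2.5)
```

Note how the exponent controls concentration: as a approaches 2 the exponent (a−2)/(a−1) approaches 0 and the top fraction captures nearly everything.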

  18. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S. Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet, gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine application. Thus, size reduction techniques are needed for viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that 25% size reduction of a 10MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  19. Rotor scale model tests for power conversion unit of GT-MHR

    Energy Technology Data Exchange (ETDEWEB)

    Baxi, C.B.; Daugherty, R.; Shenoy, A. [General Atomics, 3550 General Atomics Court, CA (United States); Kodochigov, N.G.; Belov, S.E. [Experimental Design Bureau of Machine Building, N. Novgorod (Russian Federation)

    2007-07-01

    The gas-turbine modular helium reactor (GT-MHR) combines a modular high-temperature gas-cooled reactor with a closed Brayton gas-turbine cycle power conversion unit (PCU) for thermal to electric energy conversion. The PCU has a vertical orientation and is supported on electromagnetic bearings (EMB). The Rotor Scale Model (RSM) Tests are intended to model directly the control of EMB and rotor-dynamic characteristics of the full-scale GT-MHR turbo-machine. The objectives of the RSM tests are to: -1) confirm the EMB control system design for the GT-MHR turbo-machine over the full range of operation, -2) confirm the redundancy and on-line maintainability features that have been specified for the EMBs, -3) provide a benchmark for validation of analytical tools that will be used for independent analyses of the EMB subsystem design, -4) provide experience with the installation, operation and maintenance of EMBs supporting multiple rotors with flexible couplings. As with the full-scale turbo-machine, the RSM will incorporate two rotors that are joined by a flexible coupling. Each of the rotors will be supported on one axial and two radial EMBs. Additional devices, similar in concept to radial EMBs, will be installed to simulate magnetic and/or mechanical forces representing those that would be seen by the exciter, generator, compressors and turbine. Overall, the length of the RSM rotor is about 1/3 that of the full-scale turbo-machine, while the diameter is approximately 1/5 scale. The design and sizing of the rotor is such that the number of critical speeds in the RSM is the same as in the full-scale turbo-machine. The EMBs will also be designed such that their response to rotor-dynamic forces is representative of the full-scale turbo-machine. (authors)

  20. Equilibrium and off-equilibrium trap-size scaling in one-dimensional ultracold bosonic gases

    International Nuclear Information System (INIS)

    Campostrini, Massimo; Vicari, Ettore

    2010-01-01

    We study some aspects of equilibrium and off-equilibrium quantum dynamics of dilute bosonic gases in the presence of a trapping potential. We consider systems with a fixed number of particles and study their scaling behavior with increasing trap size. We focus on one-dimensional bosonic systems, such as gases described by the Lieb-Liniger model and its Tonks-Girardeau limit of impenetrable bosons, and gases constrained in optical lattices as described by the Bose-Hubbard model. We study their quantum (zero-temperature) behavior at equilibrium and off equilibrium during the unitary time evolution arising from changes of the trapping potential, which may be instantaneous or described by a power-law time dependence, starting from the equilibrium ground state for an initial trap size. Renormalization-group scaling arguments and analytical and numerical calculations show that the trap-size dependence of the equilibrium and off-equilibrium dynamics can be cast in the form of a trap-size scaling in the low-density regime, characterized by universal power laws of the trap size, in dilute gases with repulsive contact interactions and lattice systems described by the Bose-Hubbard model. The scaling functions corresponding to several physically interesting observables are computed. Our results are of experimental relevance for systems of cold atomic gases trapped by tunable confining potentials.

  1. Synchronization in scale-free networks: The role of finite-size effects

    Science.gov (United States)

    Torres, D.; Di Muro, M. A.; La Rocca, C. E.; Braunstein, L. A.

    2015-06-01

    Synchronization problems in complex networks are very often studied by researchers due to their many applications to various fields such as neurobiology, e-commerce and completion of tasks. In particular, scale-free networks with degree distribution P(k) ∼ k^−λ are widely used in research since they are ubiquitous in Nature and other real systems. In this paper we focus on the surface relaxation growth model in scale-free networks with 2.5 < λ < 3, and study the scaling behavior of the fluctuations, in the steady state, with the system size N. We find a novel behavior of the fluctuations characterized by a crossover between two regimes at a value of N = N* that depends on λ: a logarithmic regime, found in previous research, and a constant regime. We propose a function that describes this crossover, which is in very good agreement with the simulations. We also find that, for a system size above N*, the fluctuations decrease with λ, which means that the synchronization of the system improves as λ increases. We explain this crossover analyzing the role of the network's heterogeneity produced by the system size N and the exponent of the degree distribution.
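Scale-free degree sequences of the P(k) ∼ k^−λ form used above are straightforward to generate for simulations. The following is a minimal sketch using inverse-transform sampling of the continuous power-law approximation; the cutoff `k_min` and the λ used in the usage example are illustrative choices, not values from the paper:

```python
import random

def sample_degree(lam: float, k_min: float = 2.0,
                  rng: "random.Random | None" = None) -> float:
    """Draw one degree from the continuous power-law density
    p(k) ∝ k**(-lam) for k >= k_min, via inverse-transform sampling:
    k = k_min * (1 - u) ** (-1 / (lam - 1))."""
    rng = rng or random.Random()
    u = rng.random()
    return k_min * (1.0 - u) ** (-1.0 / (lam - 1.0))

# Illustrative check with lam = 3.5, where the mean degree is finite and
# equals k_min * (lam - 1) / (lam - 2) = 10/3.  (For the paper's range
# 2.5 < lam < 3 the mean is finite but the variance diverges, which is
# exactly what drives the strong finite-size effects.)
rng = random.Random(42)
degrees = [sample_degree(3.5, rng=rng) for _ in range(100_000)]
mean_degree = sum(degrees) / len(degrees)
```

For network construction the sampled degrees would typically be rounded to integers and wired with a configuration-model style matching; that step is omitted here.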

  2. Machinability of a Stainless Steel by Electrochemical Discharge Microdrilling

    International Nuclear Information System (INIS)

    Coteata, Margareta; Pop, Nicolae; Slatineanu, Laurentiu; Schulze, Hans-Peter; Besliu, Irina

    2011-01-01

    Due to the alloying elements included in their structure to ensure increased resistance to environmental attack, stainless steels are characterized by low machinability when classical machining methods are applied. For this reason, non-traditional machining methods are sometimes applied, one of these being electrochemical discharge machining. To obtain microholes and to evaluate the machinability by electrochemical discharge microdrilling, test pieces of stainless steel were used for experimental research. The electrolyte was an aqueous solution of sodium silicate with different densities. A complete factorial plan was designed to highlight the influence of some input variables on the sizes of the considered machinability indexes (electrode tool wear, material removal rate, depth of the machined hole). By mathematical processing of the experimental data, empirical functions were established both for stainless steel and carbon steel. Graphical representations were used to give a clearer view of the influence exerted by the considered input variables on the size of the machinability indexes.

  3. An HTS machine laboratory prototype

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Træholt, Chresten

    2012-01-01

    This paper describes the Superwind HTS machine laboratory setup, a small-scale HTS machine designed and built as part of the efforts to identify and tackle some of the challenges HTS machine design may face. One of the challenges of HTS machines is a Torque Transfer Element (TTE) which...... conduction compared to a shaft. The HTS machine was successfully cooled to 77K and tests have been performed. The IV curves of the HTS field winding employing 6 HTS coils indicate that two of the coils had been damaged. A maximum torque of 78 Nm was recorded during the experiments. Loaded with 33...

  4. Machine Accounting. An Instructor's Guide.

    Science.gov (United States)

    Gould, E. Noah, Ed.

    Designed to prepare students to operate the types of accounting machines used in many medium-sized businesses, this instructor's guide presents a full-year high school course in machine accounting covering 120 hours of instruction. An introduction for the instructor suggests how to adapt the guide to present a 60-hour module which would be…

  5. Making extreme computations possible with virtual machines

    International Nuclear Information System (INIS)

    Reuter, J.; Chokoufe Nejad, B.

    2016-02-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduces the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The byte-code matrix elements are available as alternative input for the event generator WHIZARD. The byte-code interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
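The core idea, trading large compiled expression code for compact byte-code plus a small interpreter, can be illustrated with a toy stack machine. The opcode set below is invented for illustration and has no relation to O'Mega's actual instruction format:

```python
from typing import List, Tuple

def run(program: List[Tuple[str, float]]) -> float:
    """Interpret a tiny stack-machine byte-code: each instruction is an
    (opcode, operand) pair.  PUSH places its operand on the stack; ADD and
    MUL pop two values and push the result."""
    stack: List[float] = []
    for opcode, operand in program:
        if opcode == "PUSH":
            stack.append(operand)
        elif opcode == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif opcode == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {opcode}")
    return stack.pop()

# (2 + 3) * 4 encoded as a flat instruction list instead of compiled code;
# for amplitudes the same idea replaces gigabytes of generated source.
program = [("PUSH", 2.0), ("PUSH", 3.0), ("ADD", 0.0),
           ("PUSH", 4.0), ("MUL", 0.0)]
```

The compactness argument carries over: an instruction stream like `program` stores only opcodes and operands, while compiled code repeats the full control-flow machinery for every expression.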

  6. Size effect studies on smooth tensile specimens at room temperature and 400 °C

    International Nuclear Information System (INIS)

    Krompholz, K.; Kamber, J.; Groth, E.; Kalkhof, D.

    2000-06-01

    One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess the size effect related to deformation and failure models as well as material data under quasistatic and dynamic conditions in homogeneous and non-homogeneous states of strain. For these investigations the reactor pressure vessel material 20 MnMoNi 55 was selected. It was subjected to a size effect study on smooth scaled tensile specimens of three sizes. Two strain rates (2×10⁻⁵/s and 10⁻³/s) and two temperatures (room temperature and 400 °C) were selected. The investigations are aimed at supporting a gradient plasticity approach to size effects. Tests on the small specimens (diameters 3 and 9 mm) were performed on an electromechanical test machine, while the large specimens (diameter 30 mm) had to be tested on a servo-hydraulic closed-loop test machine with a force capacity of 1000 kN

  7. Top-spray fluid bed coating: Scale-up in terms of relative droplet size and drying force

    DEFF Research Database (Denmark)

    Hede, Peter Dybdahl; Bach, P.; Jensen, Anker Degn

    2008-01-01

    in terms of particle size fractions larger than 425 µm determined by sieve analysis. Results indicated that the particle size distribution may be reproduced across scale with statistically valid precision by keeping the drying force and the relative droplet size constant across scale. It is also shown...

  8. Scale and size effects in dynamic fracture of concretes and rocks

    Directory of Open Access Journals (Sweden)

    Petrov Y.

    2015-01-01

    Full Text Available A structural-temporal approach based on the notion of incubation time is used to interpret strain-rate effects in the fracture of concretes and rocks. It is established that the temporal dependences of the strength of concretes and rocks can be calculated by the incubation time criterion. The experimentally observed difference between the ultimate stresses of concrete and mortar in static and dynamic conditions is explained: the compressive strength of mortar at a low strain rate is greater than that of concrete, but at a high strain rate the opposite is true. The influence of confinement pressure on the mechanism of dynamic strength of concretes and rocks is discussed. Both the size effect and the scale effect for concrete and rock samples subjected to impact loading are analyzed. The statistical nature of the size effect contrasts with the scale effect, which is related to the definition of a spatio-temporal representative volume determining the fracture event at a given scale level.

  9. How acoustic signals scale with individual body size: common trends across diverse taxa

    OpenAIRE

    Rafael L. Rodríguez; Marcelo Araya-Salas; David A. Gray; Michael S. Reichert; Laurel B. Symes; Matthew R. Wilkins; Rebecca J. Safran; Gerlinde Höbel

    2015-01-01

    We use allometric analysis to explore how acoustic signals scale on individual body size and to test hypotheses about the factors shaping relationships between signals and body size. Across case studies spanning birds, crickets, tree crickets, and tree frogs, we find that most signal traits had low coefficients of variation, shallow allometric scalings, and little dispersion around the allometric function. We relate variation in these measures to the shape of mate preferences and the level of...

  10. A method of size inspection for fruit with machine vision

    Science.gov (United States)

    Rao, Xiuqin; Ying, Yibin

    2005-11-01

    A real-time machine vision system for fruit quality inspection was developed, consisting of rollers, an encoder, a lighting chamber, a TMS-7DSP CCD camera (PULNIX Inc.), a computer (P4 1.8 GHz, 128 MB) and a set of grading controllers. The image was binarized and the edge was detected with a line-scan-based digital image description. The MER was first applied to measure the size of the fruit, but failed: the points it measures differ from those measured with a vernier caliper. An improved method, called a software vernier caliper, was therefore developed. A line is drawn between the centroid O of the fruit and a point A on the edge, and its second intersection with the edge is computed and noted as B. A point C between A and B is selected, and a point D on the other side of the edge is searched for such that CD is perpendicular to AB; by moving C between A and B, the maximum length of CD is recorded as an extremum value. By moving A from the start of the edge to its halfway point, a series of CD values is obtained. 80 navel oranges were tested, and the maximum error of the diameter was less than 1 mm.
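
    The "software vernier caliper" procedure lends itself to a short sketch. The code below is a simplified reconstruction, not the authors' implementation: the edge is assumed to be an ordered list of (x, y) points, B is approximated as the edge point most nearly opposite A through the centroid, and the inner C/D search is collapsed into a projection onto the normal of AB, which yields the same maximum width.

```python
import math

def caliper_widths(edge, centroid):
    """For each edge point A (first half of the edge), take the chord AB
    through the centroid, then record the largest extent of the edge
    measured perpendicular to AB -- a simplified software vernier caliper."""
    ox, oy = centroid
    widths = []
    n = len(edge)
    for i in range(n // 2):
        ax, ay = edge[i]
        # B: the edge point whose direction from the centroid is most
        # opposite to A's (approximates the second intersection of line OA)
        bx, by = max(edge, key=lambda p: -((p[0] - ox) * (ax - ox) +
                                           (p[1] - oy) * (ay - oy)))
        # unit normal to the chord AB
        dx, dy = bx - ax, by - ay
        L = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / L, dx / L
        # width of the outline perpendicular to AB (the maximal CD)
        proj = [(px - ax) * nx + (py - ay) * ny for px, py in edge]
        widths.append(max(proj) - min(proj))
    return widths

# For a circular fruit outline of radius 5 every caliper width is the diameter:
circle = [(5.0 * math.cos(math.radians(a)), 5.0 * math.sin(math.radians(a)))
          for a in range(360)]
widths = caliper_widths(circle, (0.0, 0.0))   # all close to 10.0
```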

  11. Comparative adoption of cone beam computed tomography and panoramic radiography machines across Australia.

    Science.gov (United States)

    Zhang, A; Critchley, S; Monsour, P A

    2016-12-01

    The aim of the present study was to assess the current adoption of cone beam computed tomography (CBCT) and panoramic radiography (PR) machines across Australia. Information regarding registered CBCT and PR machines was obtained from radiation regulators across Australia. The number of X-ray machines was correlated with the population size, the number of dentists, and the gross state product (GSP) per capita, to determine the best-fitting regression model(s). In 2014, there were 232 CBCT and 1681 PR machines registered in Australia. Based on absolute counts, Queensland had the largest number of CBCT and PR machines whereas the Northern Territory had the smallest number. However, when based on accessibility in terms of the population size and the number of dentists, the Australian Capital Territory had the most CBCT machines and Western Australia had the most PR machines. The number of X-ray machines correlated strongly with both the population size and the number of dentists, but not with the GSP per capita. In 2014, the ratio of PR to CBCT machines was approximately 7:1. Projected increases in either the population size or the number of dentists could positively impact the adoption of PR and CBCT machines in Australia. © 2016 Australian Dental Association.

  12. Dynamic cellular manufacturing system considering machine failure and workload balance

    Science.gov (United States)

    Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad

    2018-02-01

    Machines are a key element in the production system, and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the assignment of operators to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is first validated with the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-sized problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.

  13. Scale-Dependent Habitat Selection and Size-Based Dominance in Adult Male American Alligators.

    Directory of Open Access Journals (Sweden)

    Bradley A Strickland

    Full Text Available Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. 
Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their

  14. Scale-dependent habitat selection and size-based dominance in adult male American alligators

    Science.gov (United States)

    Strickland, Bradley A.; Vilella, Francisco; Belant, Jerrold L.

    2016-01-01

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. 
Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance

  15. LHC Report: machine development

    CERN Multimedia

    Rogelio Tomás García for the LHC team

    2015-01-01

    Machine development weeks are carefully planned in the LHC operation schedule to optimise and further study the performance of the machine. The first machine development session of Run 2 ended on Saturday, 25 July. Despite various hiccoughs, it allowed the operators to make great strides towards improving the long-term performance of the LHC.   The main goals of this first machine development (MD) week were to determine the minimum beam-spot size at the interaction points given existing optics and collimation constraints; to test new beam instrumentation; to evaluate the effectiveness of performing part of the beam-squeezing process during the energy ramp; and to explore the limits on the number of protons per bunch arising from the electromagnetic interactions with the accelerator environment and the other beam. Unfortunately, a series of events reduced the machine availability for studies to about 50%. The most critical issue was the recurrent trip of a sextupolar corrector circuit –...

  16. Size effect studies on smooth tensile specimens at room temperature and 400 °C

    Energy Technology Data Exchange (ETDEWEB)

    Krompholz, K.; Kamber, J.; Groth, E.; Kalkhof, D.

    2000-06-15

    One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess the size effect related to deformation and failure models as well as material data under quasistatic and dynamic conditions in homogeneous and non-homogeneous states of strain. For these investigations the reactor pressure vessel material 20 MnMoNi 55 was selected. It was subjected to a size effect study on smooth scaled tensile specimens of three sizes. Two strain rates (2×10⁻⁵/s and 10⁻³/s) and two temperatures (room temperature and 400 °C) were selected. The investigations are aimed at supporting a gradient plasticity approach to size effects. Tests on the small specimens (diameters 3 and 9 mm) were performed on an electromechanical test machine, while the large specimens (diameter 30 mm) had to be tested on a servohydraulic closed-loop test machine with a force capacity of 1000 kN.

  17. A statistical methodology to derive the scaling law for the H-mode power threshold using a large multi-machine database

    International Nuclear Information System (INIS)

    Murari, A.; Lupelli, I.; Gaudio, P.; Gelfusa, M.; Vega, J.

    2012-01-01

    In this paper, a refined set of statistical techniques is developed and then applied to the problem of deriving the scaling law for the threshold power to access the H-mode of confinement in tokamaks. This statistical methodology is applied to the 2010 version of the ITPA International Global Threshold Data Base v6b (IGDBTHv6b). To increase the engineering and operative relevance of the results, only macroscopic physical quantities, measured in the vast majority of experiments, have been considered as candidate variables in the models. Different principled methods, such as agglomerative hierarchical variable clustering, which makes no assumption about the functional form of the scaling, and nonlinear regression are implemented to select the best subset of candidate independent variables and to improve the regression model accuracy. Two independent model selection criteria, based on the classical (Akaike information criterion) and Bayesian (Bayesian information criterion) formalisms, are then used to identify the most efficient scaling law from the candidate models. The results derived from the full multi-machine database confirm those of previous analyses but emphasize the importance of shaping quantities, elongation and triangularity. On the other hand, the scaling laws for the different machines and at different currents differ from each other at a confidence level well above 95%, suggesting caution in the use of global scaling laws for both interpretation and extrapolation purposes. (paper)
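
    The regression-plus-information-criterion machinery described above can be sketched generically. The synthetic data, variable names, and the Gaussian log-likelihood form below are illustrative assumptions; this is not the IGDBTHv6b analysis itself.

```python
import numpy as np

def fit_scaling(x_cols, y):
    """Least-squares fit of a power-law scaling: log y = b0 + sum_i b_i log x_i."""
    A = np.column_stack([np.ones(len(y))] + [np.log(x) for x in x_cols])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    rss = float(np.sum((np.log(y) - A @ coef) ** 2))
    return coef, rss

def aic_bic(rss, n, k):
    """Gaussian AIC/BIC up to additive constants; k = number of fitted parameters."""
    ll = -0.5 * n * np.log(rss / n)  # maximized log-likelihood (constants dropped)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

# Recover a known scaling y = 2 * x1^0.5 from noisy synthetic data.
rng = np.random.default_rng(0)
x1 = rng.uniform(1.0, 10.0, 200)
y = 2.0 * x1 ** 0.5 * np.exp(rng.normal(0.0, 0.05, 200))
coef, rss = fit_scaling([x1], y)       # coef[1] is the fitted exponent
aic, bic = aic_bic(rss, len(y), k=3)   # intercept + exponent + noise variance
```

    Candidate models with different variable subsets would each get an (AIC, BIC) pair, and the model minimizing the criterion is retained, as in the record's selection step.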

  18. Is the number and size of scales in Liolaemus lizards driven by climate?

    Science.gov (United States)

    José Tulli, María; Cruz, Félix B

    2018-05-03

    Ectothermic vertebrates are sensitive to thermal fluctuations in the environments where they occur. To buffer these fluctuations, ectotherms use different strategies, including the integument, which is a barrier that minimizes temperature exchange between the inner body and the surrounding air. In lizards, this barrier is constituted by keratinized scales of variable size, shape and texture, and its main functions are protection, avoidance of water loss and thermoregulation. The size of scales in lizards has been proposed to vary in relation to climatic gradients; however, it has also been observed that in some groups of Iguanian lizards it could be related to phylogeny. Thus, here, we studied the area and number of scales (dorsal and ventral) of 61 species of Liolaemus lizards distributed along a broad latitudinal and altitudinal gradient to determine the nature of the variation of the scales with climate, and found that the number and size of scales are related to climatic variables, such as temperature, and to geographical variables, such as altitude. The evolutionary process that best explained how these morphological variables evolved was the Ornstein-Uhlenbeck model. The number of scales seemed to be related to common ancestry, whereas dorsal and ventral scale areas seemed to vary as a consequence of ecological traits. In fact, the ventral area is less exposed to climate conditions such as ultraviolet radiation or wind and is thus under less pressure to change in response to alterations in external conditions. It is possible that scale ornamentation such as keels and granulosity may bring some more information in this regard. This article is protected by copyright. All rights reserved.

  19. The square lattice Ising model on the rectangle II: finite-size scaling limit

    Science.gov (United States)

    Hucht, Alfred

    2017-06-01

    Based on the results published recently (Hucht 2017 J. Phys. A: Math. Theor. 50 065201), the universal finite-size contributions to the free energy of the square lattice Ising model on the L × M rectangle, with open boundary conditions in both directions, are calculated exactly in the finite-size scaling limit L, M → ∞, T → T_c, with fixed temperature scaling variable x ∝ (T/T_c − 1)M and fixed aspect ratio ρ ∝ L/M. We derive exponentially fast converging series for the related Casimir potential and Casimir force scaling functions. At the critical point T = T_c we confirm predictions from conformal field theory (Cardy and Peschel 1988 Nucl. Phys. B 300 377, Kleban and Vassileva 1991 J. Phys. A: Math. Gen. 24 3407). The presence of corners and the related corner free energy has dramatic impact on the Casimir scaling functions and leads to a logarithmic divergence of the Casimir potential scaling function at criticality.

  20. Flow Characteristics and Sizing of Annular Seat Valves for Digital Displacement Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Bech, Michael Møller; Andersen, Torben O.

    2018-01-01

    This paper investigates the steady-state flow characteristics and power losses of annular seat valves for digital displacement machines. Annular seat valves are promising candidates for active check-valves used in digital displacement fluid power machinery, which excels in efficiency over a broad operating range. To achieve high machine efficiency, the valve flow losses and the required electrical power needed for valve switching should be low. The annular valve plunger geometry of a valve prototype developed for digital displacement machines is parametrized by three parameters: stroke length... Using the simulated maps to estimate the flow power losses and a simple generic model to estimate the electric power losses, both during digital displacement operation, optimal designs of annular seat valves, with respect to valve power losses, are derived under several different...

  1. Scaling range sizes to threats for robust predictions of risks to biodiversity.

    Science.gov (United States)

    Keith, David A; Akçakaya, H Resit; Murray, Nicholas J

    2018-04-01

    Assessments of risk to biodiversity often rely on spatial distributions of species and ecosystems. Range-size metrics used extensively in these assessments, such as area of occupancy (AOO), are sensitive to measurement scale, prompting proposals to measure them at finer scales or at different scales based on the shape of the distribution or ecological characteristics of the biota. Despite its dominant role in red-list assessments for decades, appropriate spatial scales of AOO for predicting risks of species' extinction or ecosystem collapse remain untested and contentious. There are no quantitative evaluations of the scale-sensitivity of AOO as a predictor of risks, the relationship between optimal AOO scale and threat scale, or the effect of grid uncertainty. We used stochastic simulation models to explore risks to ecosystems and species with clustered, dispersed, and linear distribution patterns subject to regimes of threat events with different frequency and spatial extent. Area of occupancy was an accurate predictor of risk (0.81 < |r| < 0.98) and performed optimally when measured with grid cells 0.1-1.0 times the largest plausible area threatened by an event. Contrary to previous assertions, estimates of AOO at these relatively coarse scales were better predictors of risk than finer-scale estimates of AOO (e.g., when measurement cells are < 1% of the area of the largest threat). The optimal scale depended on the spatial scales of threats more than the shape or size of biotic distributions. Although we found appreciable potential for grid-measurement errors, current IUCN guidelines for estimating AOO neutralize geometric uncertainty and incorporate effective scaling procedures for assessing risks posed by landscape-scale threats to species and ecosystems. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
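
    A minimal sketch of the AOO metric and its scale sensitivity, assuming point occurrence records on square grid cells (the actual IUCN guidelines add rules this toy version omits):

```python
def aoo(points, cell):
    """Area of occupancy: number of distinct grid cells of side `cell`
    containing at least one occurrence, times the cell area."""
    occupied = {(int(x // cell), int(y // cell)) for x, y in points}
    return len(occupied) * cell * cell

# A clustered distribution: AOO grows with the measurement grain,
# which is why the assessment scale must be standardized.
records = [(0.1, 0.1), (0.2, 0.2), (5.5, 5.5)]
print(aoo(records, 1))    # two occupied 1x1 cells -> AOO = 2
print(aoo(records, 10))   # one occupied 10x10 cell -> AOO = 100
```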

  2. Scaling HEP to Web size with RESTful protocols: The frontier example

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2011-01-01

    The World Wide Web has scaled to an enormous size. The largest single contributor to its scalability is the HTTP protocol, particularly when used in conformity to REST (REpresentational State Transfer) principles. High Energy Physics (HEP) computing also has to scale to an enormous size, so it makes sense to base much of it on RESTful protocols. Frontier, which reads databases with an HTTP-based RESTful protocol, has successfully scaled to deliver production detector conditions data from both the CMS and ATLAS LHC detectors to hundreds of thousands of computer cores worldwide. Frontier is also able to re-use a large amount of standard software that runs the Web: on the clients, caches, and servers. I discuss the specific ways in which HTTP and REST enable high scalability for Frontier. I also briefly discuss another protocol used in HEP computing that is HTTP-based and RESTful, and another protocol that could benefit from it. My goal is to encourage HEP protocol designers to consider HTTP and REST whenever the same information is needed in many places.

  3. Empirical evidence for multi-scaled controls on wildfire size distributions in California

    Science.gov (United States)

    Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.

    2014-12-01

    Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by this view, purely endogenous to the system and include the (1) frequency and pattern of ignitions, (2) distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions across ecoregions indicated that most ecoregions' fire-size distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggested that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine if observed fire edges corresponded with aspect breaks more often than expected by random chance. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire size distributions are multi-scaled and likely are not purely SOC.
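
    Fitting a power law to event sizes by maximum likelihood, as the record describes, is commonly done with the continuous MLE (the Hill estimator) for the tail above a threshold x_min. The sketch below assumes that standard one-parameter form, not the authors' four specific models:

```python
import math

def powerlaw_alpha(sizes, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin)),
    fitted to events of size >= xmin."""
    tail = [x for x in sizes if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Deterministic check: inverse-CDF samples from a power law with alpha = 2,
# i.e. x = xmin * (1 - u)^(-1/(alpha - 1)), should give an estimate near 2.
N = 10000
sample = [1.0 * (1.0 - (i + 0.5) / N) ** -1.0 for i in range(N)]
alpha_hat = powerlaw_alpha(sample, 1.0)   # close to 2.0
```

    A scale-limited fit like the record's "middle range" would simply restrict the sample to sizes between lower and upper cutoffs before estimating.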

  4. Energy-efficient electrical machines by new materials. Superconductivity in large electrical machines

    International Nuclear Information System (INIS)

    Frauenhofer, Joachim; Arndt, Tabea; Grundmann, Joern

    2013-01-01

    The implementation of superconducting materials in high-power electrical machines results in significant advantages regarding efficiency, size and dynamic behavior when compared to conventional machines. The application of HTS (high-temperature superconductors) in electrical machines allows significantly higher power densities to be achieved for synchronous machines. In order to gain experience with the new technology, Siemens carried out a series of development projects. A 400 kW model motor for the verification of a concept for the new technology was followed by a 4000 kVA generator as a high-speed machine, as well as a low-speed 4000 kW propeller motor with high torque. The 4000 kVA generator is still employed to carry out long-term tests and to check components. Superconducting machines have significantly lower weight and envelope dimensions compared to conventional machines, and for this reason alone they utilize resources better. At the same time, operating losses are slashed to about half and the efficiency increases. Beyond this, they set themselves apart as a result of their special features in operation, such as high overload capability, stiff alternating-load behavior and low noise. HTS machines provide significant advantages where the reduction of footprint, weight and losses or the improved dynamic behavior results in significant improvements of the overall system. Propeller motors and generators for ships, offshore plants, wind turbine and hydroelectric plants, and large power stations are just some examples. HTS machines can therefore play a significant role when it comes to efficiently using resources and energy as well as reducing CO₂ emissions.

  5. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  6. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Full Text Available Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  7. Output Enhancement in the Transfer-Field Machine Using Rotor ...

    African Journals Online (AJOL)

    Output Enhancement in the Transfer-Field Machine Using Rotor Circuit Induced Currents. ... The output of a plain transfer-field machine would be much less than that of a conventional machine of comparable size and dimensions. The use of ... The same effects have their parallel for the asynchronous mode of operation.

  8. From damselflies to pterosaurs: how burst and sustainable flight performance scale with size.

    Science.gov (United States)

    Marden, J H

    1994-04-01

    Recent empirical data for short-burst lift and power production of flying animals indicate that mass-specific lift and power output scale independently (lift) or slightly positively (power) with increasing size. These results contradict previous theory, as well as simple observation, which argues for degradation of flight performance with increasing size. Here, empirical measures of lift and power during short-burst exertion are combined with empirically based estimates of maximum muscle power output in order to predict how burst and sustainable performance scale with body size. The resulting model is used to estimate performance of the largest extant flying birds and insects, along with the largest flying animals known from fossils. These estimates indicate that burst flight performance capacities of even the largest extinct fliers (estimated mass 250 kg) would allow takeoff from the ground; however, limitations on sustainable power output should constrain capacity for continuous flight at body sizes exceeding 0.003-1.0 kg, depending on relative wing length and flight muscle mass.

  9. Vascularity and grey-scale sonographic features of normal cervical lymph nodes: variations with nodal size

    International Nuclear Information System (INIS)

    Ying, Michael; Ahuja, Anil; Brook, Fiona; Metreweli, Constantine

    2001-01-01

    AIM: This study was undertaken to investigate variations in the vascularity and grey-scale sonographic features of cervical lymph nodes with their size. MATERIALS AND METHODS: High-resolution grey-scale sonography and power Doppler sonography were performed on 1133 cervical nodes in 109 volunteers who had a sonographic examination of the neck. Standardized parameters were used in power Doppler sonography. RESULTS: About 90% of lymph nodes with a maximum transverse diameter greater than 5 mm showed vascularity and an echogenic hilus. Smaller nodes were less likely to show vascularity and an echogenic hilus. As the size of the lymph nodes increased, the intranodal blood flow velocity increased significantly (P < 0.05). CONCLUSIONS: The findings provide a baseline for grey-scale and power Doppler sonography of normal cervical lymph nodes. Sonologists will find varying vascularity and grey-scale appearances when encountering nodes of different sizes.

  10. The effect of intermediate stop and ball size in fabrication of recycled steel powder using ball milling from machining steel chips

    International Nuclear Information System (INIS)

    Fitri, M.W.M.; Shun, C.H.; Rizam, S.S.; Shamsul, J.B.

    2007-01-01

    A feasibility study for producing recycled steel powder from steel scrap by ball milling was carried out. Steel scrap from machining was used as the raw material and was milled in a planetary ball mill. Three samples were prepared in order to study the effects of an intermediate stop and of ball size. The sample with an intermediate stop during milling showed a finer particle size than the sample milled continuously. The decrease in vial temperature during the intermediate stop makes the steel powder less ductile, so it is more easily work-hardened and fragmented into fine powder. A mix of small and large balls gives the best production of recycled steel powder, as it delivers a higher impact force to the scrap and accelerates the fragmentation of the steel scrap into powder. (author)

  11. Surface and subsurface cracks characteristics of single crystal SiC wafer in surface machining

    Energy Technology Data Exchange (ETDEWEB)

    Qiusheng, Y., E-mail: qsyan@gdut.edu.cn; Senkai, C., E-mail: senkite@sina.com; Jisheng, P., E-mail: panjisheng@gdut.edu.cn [School of Electromechanical Engineering, Guangdong University of Technology, Guangzhou, 510006 (China)

    2015-03-30

    Different machining processes were used in single crystal SiC wafer machining. SEM was used to observe the surface morphology, and a cross-sectional cleavage microscopy method was used for subsurface crack detection. Surface and subsurface crack characteristics of single crystal SiC wafers in abrasive machining were analysed. The results show that the surface and subsurface crack system of single crystal SiC wafers in abrasive machining includes radial cracks, lateral cracks and median cracks. In the lapping process, material removal is dominated by brittle removal, and many chipping pits were found on the lapped surface. As the particle size becomes smaller, the surface roughness and subsurface crack depth decrease. When the particle size was reduced to 1.5 µm, the surface roughness Ra was reduced to 24.0 nm and the maximum subsurface crack depth was 1.2 µm. The efficiency of grinding is higher than that of lapping, and plastic removal can be achieved by changing the process parameters. Material removal was mostly by brittle fracture when grinding with a 325# diamond wheel: plow scratches and chipping pits were found on the ground surface, the surface roughness Ra was 17.7 nm and the maximum subsurface crack depth was 5.8 µm. When grinding with an 8000# diamond wheel, material removal was by plastic flow: plastic scratches were found on the surface, and a smooth surface of roughness Ra 2.5 nm without any subsurface cracks was obtained. Atomic-scale removal was possible in cluster magnetorheological finishing with a diamond abrasive size of 0.5 µm, and a super smooth surface was eventually obtained with a roughness of Ra 0.4 nm and no subsurface cracks.

  12. Scaling of lifting forces in relation to object size in whole body lifting

    NARCIS (Netherlands)

    Kingma, I.; van Dieen, J.H.; Toussaint, H.M.

    2005-01-01

    Subjects prepare for a whole body lifting movement by adjusting their posture and scaling their lifting forces to the expected object weight. The expectancy is based on visual and haptic size cues. This study aimed to find out whether lifting force overshoots related to object size cues disappear or

  13. Machine learning for adaptive many-core machines: a practical approach

    CERN Document Server

    Lopes, Noel

    2015-01-01

    The overwhelming data produced every day and the increasing performance and cost requirements of applications are transversal to a wide range of activities in society, from science to industry. In particular, the magnitude and complexity of the tasks that Machine Learning (ML) algorithms have to solve are driving the need to devise adaptive many-core machines that scale well with the volume of data, or in other words, can handle Big Data. This book gives a concise view on how to extend the applicability of well-known ML algorithms on Graphics Processing Units (GPUs) with data scalability in mind.

  14. Finite-size scaling of clique percolation on two-dimensional Moore lattices

    Science.gov (United States)

    Dong, Jia-Qi; Shen, Zhou; Zhang, Yongwen; Huang, Zi-Gang; Huang, Liang; Chen, Xiaosong

    2018-05-01

    Clique percolation has attracted much attention due to its significance in understanding topological overlap among communities and dynamical instability of structured systems. Rich critical behavior has been observed in clique percolation on Erdős-Rényi (ER) random graphs, but few works have discussed clique percolation on finite dimensional systems. In this paper, we have defined a series of characteristic events, i.e., the historically largest size jumps of the clusters, in the percolating process of adding bonds and developed a new finite-size scaling scheme based on the interval of the characteristic events. Through the finite-size scaling analysis, we have found, interestingly, that, in contrast to the clique percolation on an ER graph where the critical exponents are parameter dependent, the two-dimensional (2D) clique percolation simply shares the same critical exponents with traditional site or bond percolation, independent of the clique percolation parameters. This has been corroborated by bridging two special types of clique percolation to site percolation on 2D lattices. Mechanisms for the difference of the critical behaviors between clique percolation on ER graphs and on 2D lattices are also discussed.
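
The "historically largest size jump" diagnostic described above can be illustrated on ordinary 2D bond percolation (a hedged sketch, not the authors' clique-percolation code; function names are hypothetical): bonds are added in random order and the largest increase of the giant-cluster size is recorded.

```python
import random

class UnionFind:
    """Union-find with path halving and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_jump(L, seed=0):
    """Add the bonds of an L x L square lattice in random order; return
    (bond index of the historically largest jump, jump magnitude) of the
    largest-cluster size."""
    rng = random.Random(seed)
    n = L * L
    bonds = []
    for x in range(L):
        for y in range(L):
            i = x * L + y
            if x + 1 < L:
                bonds.append((i, (x + 1) * L + y))
            if y + 1 < L:
                bonds.append((i, x * L + y + 1))
    rng.shuffle(bonds)
    uf = UnionFind(n)
    smax = 1                     # current largest cluster size
    best_jump, best_t = 0, 0
    for t, (a, b) in enumerate(bonds):
        uf.union(a, b)
        s = uf.size[uf.find(a)]
        if s - smax > best_jump:  # a new historically largest jump
            best_jump, best_t = s - smax, t
        smax = max(smax, s)
    return best_t, best_jump
```

For 2D bond percolation the largest jump clusters around the critical bond fraction p_c = 1/2, which is the kind of characteristic event the finite-size scaling scheme is built on.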

  15. FR-type radio sources in COSMOS: relation of radio structure to size, accretion modes and large-scale environment

    Science.gov (United States)

    Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnener, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team

    2018-01-01

    The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio structures, from simple to more complex, giving rise to a variety of classification schemes. The question which still remains open, given deeper surveys revealing new populations of radio sources, is whether this plethora of radio structures can be attributed to the physical properties of the host or to the environment. Here we present an analysis of the radio structure of radio-selected AGN from the VLA-COSMOS Large Project at 3 GHz (JVLA-COSMOS; Smolčić et al.) in relation to: 1) their linear projected size, 2) the Eddington ratio, and 3) the environment their hosts lie within. We classify these as FRI (jet-like) and FRII (lobe-like) based on the FR-type classification scheme, and compare them to a sample of jet-less radio AGN in JVLA-COSMOS. We measure their linear projected sizes using a semi-automatic machine learning technique. Their Eddington ratios are calculated from X-ray data available for COSMOS. As environmental probes we take the X-ray groups (hundreds of kpc) and the density fields (~Mpc scale) in COSMOS. We find that FRII radio sources are on average larger than FRIs, which agrees with the literature. But contrary to past studies, we find no dichotomy in FR objects in JVLA-COSMOS given their Eddington ratios, as on average they exhibit similar values. Furthermore, our results show that the large-scale environment does not explain the observed dichotomy in lobe- and jet-like FR-type objects, as both types are found in similar environments, but it does affect the shape of the radio structure, introducing bends for objects closer to the centre of an X-ray group.

  16. Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information

    OpenAIRE

    Wei-Jong Yang; Wei-Hau Du; Pau-Choo Chang; Jar-Ferr Yang; Pi-Hsia Hung

    2017-01-01

    The demands for smart visual thing recognition in various devices have increased rapidly for daily smart production, living and learning systems in recent years. This paper proposes a visual thing recognition system, which combines binary scale-invariant feature transform (SIFT), a bag-of-words model (BoW), and support vector machine (SVM) classifiers by using color information. Since the traditional SIFT features and SVM classifiers only use the gray information, color information is still an importan...

  17. Theory of critical phenomena in finite-size systems scaling and quantum effects

    CERN Document Server

    Brankov, Jordan G; Tonchev, Nicholai S

    2000-01-01

    The aim of this book is to familiarise the reader with the rich collection of ideas, methods and results available in the theory of critical phenomena in systems with confined geometry. The existence of universal features of the finite-size effects arising due to highly correlated classical or quantum fluctuations is explained by the finite-size scaling theory. This theory (1) offers an interpretation of experimental results on finite-size effects in real systems; (2) gives the most reliable tool for extrapolation to the thermodynamic limit of data obtained by computer simulations; (3) reveals

  18. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    Science.gov (United States)

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performances of classifiers, which is termed as the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performances. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm proposed is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrated the superior generalization performance and efficiency of the FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.
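
The core idea — hidden-node weights taken from SVDs of random data subsets, output weights by least squares — can be sketched in NumPy. This is a hedged simplification, not the authors' FSVD-H-ELM implementation; the function names and subset scheme are hypothetical:

```python
import numpy as np

def fsvd_elm_train(X, T, n_hidden, n_subsets=4, rng=None):
    """Simplified SVD-hidden-node ELM sketch: hidden weights are right
    singular vectors computed on random subsets of the data (divide and
    conquer), output weights by least squares."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    per = n_hidden // n_subsets      # hidden nodes contributed per subset
    Ws = []
    for _ in range(n_subsets):
        idx = rng.choice(n, size=max(per, d), replace=True)
        # right singular vectors of the subset serve as hidden-node weights
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
        Ws.append(Vt[:per].T)
    W = np.hstack(Ws)                # shape (d, n_hidden)
    b = rng.standard_normal(W.shape[1])
    H = np.tanh(X @ W + b)           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T     # least-squares output weights
    return W, b, beta

def fsvd_elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because beta is the least-squares minimizer, the training error can never exceed that of the trivial zero predictor, which is a cheap sanity check on any ELM variant.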

  19. Virtual Vector Machine for Bayesian Online Classification

    OpenAIRE

    Minka, Thomas P.; Xiang, Rongjing; Yuan; Qi

    2012-01-01

    In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows you to smoothly trade off prediction accuracy with memory size. The virtual vector machine summarizes the information con...

  20. Power Electronics and Electric Machines | Transportation Research | NREL

    Science.gov (United States)

    Power Electronics and Electric Machines NREL's power electronics and electric machines research is helping boost the performance of power electronics components and systems, while driving down size and weight to address technical barriers to EDV commercialization. EDVs rely heavily on power electronics to distribute the proper

  1. A machine learning approach to the accurate prediction of monitor units for a compact proton machine.

    Science.gov (United States)

    Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu

    2018-05-01

    Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from machine learning models was analyzed. Model prediction accuracy was also compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, gantry angle dependence of OFs was measured for three groups of options categorized on the selection of the second scatters. Field size dependence of OFs was investigated for the measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which shows a considerably large discrepancy of up to 7.7% for the treatment fields with full range and full modulation width. The Cubist-based solution
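
The ten-fold cross-validation protocol described above can be sketched as follows. Closed-form ridge regression stands in for the Random Forest/XGBoost/Cubist models used in the study so that the sketch stays dependency-free; all names, features and data are illustrative placeholders:

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    # closed-form ridge regression: (X^T X + lam I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_cv_error(X, y, k=10, lam=1e-3, seed=0):
    """k-fold cross-validation of a stand-in regression model.
    Rows of X could be features such as gantry angle, range and modulation
    width; y the measured output factors. Returns the mean absolute
    relative error across the k held-out folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = ridge_fit(X[train], y[train], lam)
        pred = X[test] @ w
        errs.append(np.mean(np.abs(pred - y[test]) / np.abs(y[test])))
    return float(np.mean(errs))
```

Each fold is held out exactly once, so the reported error reflects performance on data the model never saw during fitting, which is the "overfitting" guard the abstract refers to.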

  2. Vertebral scale system to measure canine heart size in radiographs

    International Nuclear Information System (INIS)

    Buchanan, J.W.; Bucheler, J.

    1995-01-01

    A method for measuring canine heart size in radiographs was developed on the basis that there is a good correlation between heart size and body length regardless of the conformation of the thorax. The lengths of the long and short axes of the heart of 100 clinically normal dogs were determined with calipers, and the dimensions were scaled against the length of vertebrae dorsal to the heart beginning with T4. The sum of the long and short axes of the heart expressed as vertebral heart size was 9.7 +/- 0.5 vertebrae. The differences between dogs with a wide or deep thorax, males and females, and right or left lateral recumbency were not significant. The caudal vena cava was 0.75 +/- 0.13 vertebrae in comparison to the length of the vertebra over the tracheal bifurcation
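
The scaling step — expressing each heart axis as a count of vertebrae, starting at T4 — is simple arithmetic. A hypothetical helper (the clinical measurement is done with calipers on the radiograph; this is only a sketch of the unit conversion):

```python
def axis_in_vertebrae(axis_mm, vert_mm):
    """Express an axis length as a (fractional) number of vertebrae,
    stepping caudally through the measured vertebral lengths from T4."""
    total = 0.0
    remaining = axis_mm
    for v in vert_mm:
        if remaining >= v:
            total += 1.0
            remaining -= v
        else:
            return total + remaining / v
    # axis longer than the measured run: extrapolate with the last vertebra
    return total + remaining / vert_mm[-1]

def vertebral_heart_size(long_mm, short_mm, vert_mm):
    """Vertebral heart size = long axis + short axis, both in vertebral units."""
    return axis_in_vertebrae(long_mm, vert_mm) + axis_in_vertebrae(short_mm, vert_mm)
```

For example, with vertebrae of uniform 10 mm length, a 55 mm long axis and 42 mm short axis give 5.5 + 4.2 = 9.7 vertebrae, matching the mean reported above.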

  3. Machine learning enhanced optical distance sensor

    Science.gov (United States)

    Amin, M. Junaid; Riza, N. A.

    2018-01-01

    Presented for the first time is a machine learning enhanced optical distance sensor. The distance sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. This spot with varying spot sizes is viewed by an off-axis camera and the spot size data is processed to compute the distance. In particular, proposed and demonstrated in this paper is the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and corresponding labels that are the actual target distance values to train a machine learning model. The optimized training model is trained over a 1000 mm (or 1 m) experimental target distance range. Using the machine learning algorithm produces a training set and testing set distance measurement errors of learning. Applications for the proposed sensor include industrial distance sensing, where target-material-specific training models can be generated to realize distance measurements with errors below 1%.
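
Regularized polynomial regression of the kind described (mapping a measured spot-size feature to target distance) can be sketched as a ridge-regularized Vandermonde fit. The data, degree and function names below are synthetic placeholders, not the sensor's actual calibration:

```python
import numpy as np

def polyfit_ridge(x, y, degree=3, lam=1e-6):
    """Ridge-regularized polynomial regression: fit y ~ w0 + w1 x + ... + wd x^d.
    x could be a normalized spot-size feature, y the known target distance."""
    A = np.vander(x, degree + 1, increasing=True)   # design matrix [1, x, x^2, ...]
    # closed-form ridge solution: (A^T A + lam I)^-1 A^T y
    w = np.linalg.solve(A.T @ A + lam * np.eye(degree + 1), A.T @ y)
    return w

def polyeval(w, x):
    """Evaluate the fitted polynomial at new feature values."""
    return np.vander(x, len(w), increasing=True) @ w
```

The regularization term lam keeps the fit stable when the polynomial degree is high relative to the noise in the acquired features.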

  4. Biofuel manufacturing from woody biomass: effects of sieve size used in biomass size reduction.

    Science.gov (United States)

    Zhang, Meng; Song, Xiaoxu; Deines, T W; Pei, Z J; Wang, Donghai

    2012-01-01

    Size reduction is the first step for manufacturing biofuels from woody biomass. It is usually performed using milling machines and the particle size is controlled by the size of the sieve installed on a milling machine. There are reported studies about the effects of sieve size on energy consumption in milling of woody biomass. These studies show that energy consumption increased dramatically as sieve size became smaller. However, in these studies, the sugar yield (proportional to biofuel yield) in hydrolysis of the milled woody biomass was not measured. The lack of comprehensive studies about the effects of sieve size on energy consumption in biomass milling and sugar yield in hydrolysis process makes it difficult to decide which sieve size should be selected in order to minimize the energy consumption in size reduction and maximize the sugar yield in hydrolysis. The purpose of this paper is to fill this gap in the literature. In this paper, knife milling of poplar wood was conducted using sieves of three sizes (1, 2, and 4 mm). Results show that, as sieve size increased, energy consumption in knife milling decreased and sugar yield in hydrolysis increased in the tested range of particle sizes.

  5. Challenges for coexistence of machine to machine and human to human applications in mobile network

    DEFF Research Database (Denmark)

    Sanyal, R.; Cianca, E.; Prasad, Ramjee

    2012-01-01

    A key factor for the evolution of the mobile networks towards 4G is to bring to fruition high bandwidth per mobile node. Eventually, due to the advent of a new class of applications, namely, Machine-to-Machine, we foresee new challenges where bandwidth per user is no more the primal driver...... be evolved to address various nuances of the mobile devices used by man and machines. The bigger question is as follows. Is the state-of-the-art mobile network designed optimally to cater both the Human-to-Human and Machine-to-Machine applications? This paper presents the primary challenges....... As an immediate impact of the high penetration of M2M devices, we envisage a surge in the signaling messages for mobility and location management. The cell size will shrivel due to high tele-density resulting in even more signaling messages related to handoff and location updates. The mobile network should...

  6. Studying time of flight imaging through scattering media across multiple size scales (Conference Presentation)

    Science.gov (United States)

    Velten, Andreas

    2017-05-01

    Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We attempt a study of scattering and methods of imaging through scattering across different scales and media, particularly with respect to the use of time-of-flight information. We can show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.

  7. Finite-size scaling for quantum chains with an oscillatory energy gap

    International Nuclear Information System (INIS)

    Hoeger, C.; Gehlen, G. von; Rittenberg, V.

    1984-07-01

    We show that the existence of zeroes of the energy gap for finite quantum chains is related to a nonvanishing wavevector. Finite-size scaling ansaetze are formulated for incommensurable and oscillatory structures. The ansaetze are verified in the one-dimensional XY model in a transverse field. (orig.)

  8. Flow Characteristics and Sizing of Annular Seat Valves for Digital Displacement Machines

    Directory of Open Access Journals (Sweden)

    Christian Nørgård

    2018-01-01

    This paper investigates the steady-state flow characteristics and power losses of annular seat valves for digital displacement machines. Annular seat valves are promising candidates for active check-valves used in digital displacement fluid power machinery, which excels in efficiency in a broad operating range. To achieve high machine efficiency, the valve flow losses and the required electrical power needed for valve switching should be low. The annular valve plunger geometry, of a valve prototype developed for digital displacement machines, is parametrized by three parameters: stroke length, seat radius and seat width. The steady-state flow characteristics are analyzed using static axi-symmetric computational fluid dynamics. The pressure drops and flow forces are mapped in the valve design space for several different flow rates. The simulated results are compared against measurements using a valve prototype. Using the simulated maps to estimate the flow power losses and a simple generic model to estimate the electric power losses, both during digital displacement operation, optimal designs of annular seat valves, with respect to valve power losses, are derived under several different operating conditions.

  9. FRICTION - WELDING MACHINE AUTOMATIC CONTROL CIRCUIT DESIGN AND APPLICATION

    OpenAIRE

    Hakan ATEŞ; Ramazan BAYINDIR

    2003-01-01

    In this work, the automatic controllability of a laboratory-sized friction-welding machine has been investigated. The laboratory-sized friction-welding machine was composed of a motor, brake, rotary and constant samples late pliers, and a hydraulic unit. In the automatic method, welding parameters such as friction time, friction pressure, forge time and forge pressure can be applied precisely using time relays and contactors. At the end of the experimental study it was observed that the automatic control sys...

  10. Settlement-Size Scaling among Prehistoric Hunter-Gatherer Settlement Systems in the New World.

    Directory of Open Access Journals (Sweden)

    W Randall Haas

    Settlement size predicts extreme variation in the rates and magnitudes of many social and ecological processes in human societies. Yet, the factors that drive human settlement-size variation remain poorly understood. Size variation among economically integrated settlements tends to be heavy tailed, such that the smallest settlements are extremely common and the largest settlements extremely large and rare. The upper tail of this size distribution is often formalized mathematically as a power-law function. Explanations for this scaling structure in human settlement systems tend to emphasize complex socioeconomic processes including agriculture, manufacturing, and warfare, behaviors that tend to differentially nucleate and disperse populations hierarchically among settlements. But the degree to which heavy-tailed settlement-size variation requires such complex behaviors remains unclear. By examining the settlement patterns of eight prehistoric New World hunter-gatherer settlement systems spanning three distinct environmental contexts, this analysis explores the degree to which heavy-tailed settlement-size scaling depends on the aforementioned socioeconomic complexities. Surprisingly, the analysis finds that power-law models offer plausible and parsimonious statistical descriptions of prehistoric hunter-gatherer settlement-size variation. This finding reveals that incipient forms of hierarchical settlement structure may have preceded socioeconomic complexity in human societies and points to a need for additional research to explicate how mobile foragers came to exhibit settlement patterns that are more commonly associated with hierarchical organization. We propose that hunter-gatherer mobility with preferential attachment to previously occupied locations may account for the observed structure in site-size variation.
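
Power-law tail models of the kind fitted above are commonly estimated by maximum likelihood. A minimal sketch of the continuous Hill estimator, assuming a known lower cutoff xmin (the paper's actual fitting procedure may estimate the cutoff and assess goodness-of-fit differently):

```python
import numpy as np

def hill_alpha(sizes, xmin):
    """Maximum-likelihood (Hill) estimate of the exponent alpha of a
    power-law tail P(x) ~ x^-alpha for x >= xmin (continuous form):
    alpha_hat = 1 + n / sum(ln(x_i / xmin))."""
    x = np.asarray(sizes, dtype=float)
    tail = x[x >= xmin]                       # restrict to the upper tail
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))
```

Sampling from the corresponding Pareto distribution by inverse-CDF (x = xmin * u^(-1/(alpha-1)) with u uniform on (0, 1)) and re-estimating alpha is a quick self-consistency check on the estimator.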

  11. Investigation of permanent magnet machines for downhole applications: Design, prototype and testing of a flux-switching permanent magnet machine

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Anyuan

    2011-01-15

    The current standard electrical downhole machine is the induction machine, which is relatively inefficient. Permanent magnet (PM) machines, having higher efficiencies, higher torque densities and smaller volumes, have been widely employed in industrial applications to replace conventional machines, but few have been developed for downhole applications due to the high ambient temperatures in deep wells and the low temperature stability of PM materials over time. Today, with the development of variable speed drives and the application of high temperature magnet materials, it is increasingly interesting for the oil and gas industries to develop PM machines for downhole applications. Recently, some PM machine designs have been presented for downhole applications, normally addressing a certain specific downhole case. In this thesis the focus has been put on the performance investigation of different PM machines for general downhole cases, in which the machine outer diameter is limited by the well size, while the machine axial length may be relatively long. Machine reliability is the most critical requirement, while high torque density and high efficiency are also desirable. The purpose is to understand how the special constraints in downhole conditions affect the performance of different machines. First of all, three basic machine concepts, which are the radial, axial and transverse flux machines, are studied in detail by analytical methods. Their torque density, efficiency, power factor and power capability are investigated with respect to the machine axial length and pole number. The presented critical performance comparisons provide an indication of the machines best suited with respect to performance and size for downhole applications. Conventional radial flux permanent magnet (RFPM) machines with the PMs on the rotor can provide high torque density and high efficiency. This type of machine has been suggested for several different

  12. The smallest possible thermal machines and the foundations of thermodynamics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In my talk I raise the question of the fundamental limits to the size of thermal machines – refrigerators, heat pumps and work producing engines – and I will present the smallest possible ones. I will then discuss the issue of a possible complementarity between size and efficiency and show that even the smallest machines could be maximally efficient, and I will also present a new point of view on what work is and what thermal machines actually do. Finally I will present a completely new approach to the foundations of thermodynamics that follows from these results.

  13. On the use of Cloud Computing and Machine Learning for Large-Scale SAR Science Data Processing and Quality Assessment Analysis

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Geodetic imaging is revolutionizing geophysics, but the scope of discovery has been limited by the labor-intensive technological implementation of the analyses. The Advanced Rapid Imaging and Analysis (ARIA) project has proven capability to automate SAR data processing and analysis. Existing and upcoming SAR missions such as Sentinel-1A/B and NISAR are also expected to generate massive amounts of SAR data. This has brought to the forefront the need for analytical tools for SAR quality assessment (QA) on the large volumes of SAR data, a critical step before higher-level time series and velocity products can be reliably generated. Initially leveraging an advanced hybrid-cloud computing science data system for performing large-scale processing, machine learning approaches were augmented for automated analysis of various quality metrics. Machine learning-based user training of features, cross-validation, and prediction models were integrated into our cloud-based science data processing flow to enable large-scale and high-throughput QA analytics, enabling improvements to the production quality of geodetic data products.

  14. Fabrication of Superhydrophobic Metallic Surface by Wire Electrical Discharge Machining for Seamless Roll-to-Roll Printing

    Directory of Open Access Journals (Sweden)

    Jin-Young So

    2018-04-01

    This paper proposes a direct one-step method to fabricate a multi-scale superhydrophobic metallic seamless roll mold. The mold was fabricated using the wire electrical discharge machining (WEDM) technique for a roll-to-roll imprinting application to produce a large superhydrophobic surface. Taking advantage of the exfoliating characteristic of the metallic surface, nano-sized surface roughness was spontaneously formed while manufacturing the micro-sized structure: that is, a dual-scale hierarchical structure was easily produced in a simple one-step fabrication over a large area on the aluminum metal surface. This hierarchical structure showed superhydrophobicity without chemical coating. A roll-type seamless mold for the roll-to-roll process was fabricated by engraving the patterns on the cylindrical substrate, thereby enabling the continuous production of a film with superhydrophobicity.

  15. Turbulent Concentration of MM-Size Particles in the Protoplanetary Nebula: Scale-Dependent Multiplier Functions

    Science.gov (United States)

    Cuzzi, Jeffrey N.; Hartlep, Thomas; Weston, B.; Estremera, Shariff Kareem

    2014-01-01

    The initial accretion of primitive bodies (asteroids and TNOs) from freely-floating nebula particles remains problematic. Here we focus on the asteroids where constituent particle (read "chondrule") sizes are observationally known; similar arguments will hold for TNOs, but the constituent particles in those regions will be smaller, or will be fluffy aggregates, and are unobserved. Traditional growth-bysticking models encounter a formidable "meter-size barrier" [1] (or even a mm-cm-size barrier [2]) in turbulent nebulae, while nonturbulent nebulae form large asteroids too quickly to explain long spreads in formation times, or the dearth of melted asteroids [3]. Even if growth by sticking could somehow breach the meter size barrier, other obstacles are encountered through the 1-10km size range [4]. Another clue regarding planetesimal formation is an apparent 100km diameter peak in the pre-depletion, pre-erosion mass distribution of asteroids [5]; scenarios leading directly from independent nebula particulates to this size, which avoid the problematic m-km size range, could be called "leapfrog" scenarios [6-8]. The leapfrog scenario we have studied in detail involves formation of dense clumps of aerodynamically selected, typically mm-size particles in turbulence, which can under certain conditions shrink inexorably on 100-1000 orbit timescales and form 10-100km diameter sandpile planetesimals. The typical sizes of planetesimals and the rate of their formation [7,8] are determined by a statistical model with properties inferred from large numerical simulations of turbulence [9]. Nebula turbulence can be described by its Reynolds number Re = L/eta sup(4/3), where L = ETA alpha sup (1/2) the largest eddy scale, H is the nebula gas vertical scale height, and a the nebula turbulent viscosity parameter, and ? is the Kolmogorov or smallest scale in turbulence (typically about 1km), with eddy turnover time t?. 
In the nebula, Re is far larger than any numerical simulation can
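    The Reynolds-number scaling quoted in this record can be sketched numerically; all input values below are illustrative assumptions, not figures from the paper:

```python
# Sketch of the turbulence scales defined above: Re = (L/eta)^(4/3),
# with L = H * alpha**0.5. The numbers are assumed for illustration.
H = 5.0e12        # nebula gas vertical scale height [cm] (assumption)
alpha = 1e-4      # turbulent viscosity parameter (assumption)
eta = 1e5         # Kolmogorov scale, ~1 km expressed in cm

L = H * alpha**0.5             # largest eddy scale
Re = (L / eta) ** (4.0 / 3.0)  # turbulence Reynolds number
print(f"L = {L:.2e} cm, Re = {Re:.2e}")
```

    Even with these modest assumed values, Re is far beyond what direct numerical simulations can resolve, which is why the record's statistical modelling approach is needed.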

  16. Positional dependence of scale size and shape in butterfly wings: wing-wide phenotypic coordination of color-pattern elements and background.

    Science.gov (United States)

    Kusaba, Kiseki; Otaki, Joji M

    2009-02-01

    Butterfly wing color-patterns are a phenotypically coordinated array of scales whose color is determined as cellular interpretation outputs for morphogenic signals. Here we investigated distribution patterns of scale shape and size in relation to position and coloration on the hindwings of a nymphalid butterfly Junonia orithya. Most scales had a smooth edge but scales at and near the natural and ectopic eyespot foci and in the postbasal area were jagged. Scale size decreased regularly from the postbasal to distal areas, and eyespots occasionally had larger scales than the background. Reasonable correlations were obtained between the eyespot size and focal scale size in females. Histological and real-time individual observations of the color-pattern developmental sequence showed that the background brown and blue colors expanded from the postbasal to distal areas independently from the color-pattern elements such as eyespots. These data suggest that morphogenic signals for coloration directly or indirectly influence the scale shape and size and that the blue "background" is organized by a long-range signal from an unidentified organizing center in J. orithya.

  17. Observing invisible machines with invisible light: The mechanics of molecular machines

    NARCIS (Netherlands)

    Panman, M.R.

    2013-01-01

    Over the past few decades, chemists have designed and constructed a large variety of artificial molecular machines. Understanding of the fundamental principles behind motion at the molecular scale is key to the development of such devices. Motion at the molecular level is very different from that

  18. Vending machine assessment methodology. A systematic review.

    Science.gov (United States)

    Matthews, Melissa A; Horacek, Tanya M

    2015-07-01

    The nutritional quality of food and beverage products sold in vending machines has been implicated as a contributing factor to the development of an obesogenic food environment. How comprehensive, reliable, and valid are the current assessment tools for vending machines to support or refute these claims? A systematic review was conducted to summarize, compare, and evaluate the current methodologies and available tools for vending machine assessment. A total of 24 relevant research studies published between 1981 and 2013 met inclusion criteria for this review. The methodological variables reviewed in this study include assessment tool type, study location, machine accessibility, product availability, healthfulness criteria, portion size, price, product promotion, and quality of scientific practice. There were wide variations in the depth of the assessment methodologies and product healthfulness criteria utilized among the reviewed studies. Of the reviewed studies, 39% evaluated machine accessibility, 91% evaluated product availability, 96% established healthfulness criteria, 70% evaluated portion size, 48% evaluated price, 52% evaluated product promotion, and 22% evaluated the quality of scientific practice. Of all reviewed articles, 87% reached conclusions that provided insight into the healthfulness of vended products and/or vending environment. Product healthfulness criteria and complexity for snack and beverage products were also found to be variable between the reviewed studies. These findings make it difficult to compare results between studies. A universal, valid, and reliable vending machine assessment tool that is comprehensive yet user-friendly is recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Coupling machine learning with mechanistic models to study runoff production and river flow at the hillslope scale

    Science.gov (United States)

    Marçais, J.; Gupta, H. V.; De Dreuzy, J. R.; Troch, P. A. A.

    2016-12-01

    Geomorphological structure and geological heterogeneity of hillslopes are major controls on runoff responses. The diversity of hillslopes (morphological shapes and geological structures) on one hand, and the highly nonlinear runoff response on the other hand, make it difficult to transpose what has been learnt at one specific hillslope to another. Therefore, making reliable predictions of runoff appearance or river flow for a given hillslope is a challenge. Classic model calibration (based on inverse-problem techniques) must be repeated for each specific hillslope and requires calibration data, which is impractical when applied to thousands of cases. Here we propose a novel modeling framework that couples process-based models with a data-based approach. First, we develop a mechanistic model, based on hillslope-storage Boussinesq equations (Troch et al. 2003), able to model nonlinear runoff responses to rainfall at the hillslope scale. Second, we set up a model database representing thousands of non-calibrated simulations. These simulations investigate different hillslope shapes (real ones obtained by analyzing a 5 m digital elevation model of Brittany, and synthetic ones), different hillslope geological structures (i.e. different parametrizations) and different hydrologic forcing terms (i.e. different infiltration chronicles). Then, we use this model library to train a machine learning model on this physically based database. Machine learning model performance is then assessed by a classic validation phase (testing it on new hillslopes and comparing machine learning with mechanistic outputs). Finally, we use this machine learning model to learn which hillslope properties control runoff. This methodology will be further tested by combining synthetic datasets with real ones.
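    The workflow described in this record can be sketched with a toy stand-in; the `mechanistic_proxy` function and all parameter ranges below are invented for illustration and do not represent the hillslope-storage Boussinesq model:

```python
import numpy as np

# Toy sketch of the workflow: build a library of uncalibrated
# "mechanistic" runs, train a data-driven surrogate on it, then
# validate on fresh cases.
rng = np.random.default_rng(0)

def mechanistic_proxy(slope, conductivity, rain):
    # stand-in for an expensive physically based simulation
    return rain * slope / (1.0 + conductivity)

lo, hi = [0.01, 0.1, 1.0], [0.5, 10.0, 50.0]  # assumed parameter ranges

# 1) model library: thousands of non-calibrated simulations
X = rng.uniform(lo, hi, size=(4000, 3))
y = np.array([mechanistic_proxy(*row) for row in X])

# 2) simplest possible data-based model: 1-nearest-neighbour lookup
scale = X.std(axis=0)
def surrogate(query):
    return y[np.argmin(np.linalg.norm((X - query) / scale, axis=1))]

# 3) validation on new hillslopes never seen by the surrogate
Xtest = rng.uniform(lo, hi, size=(200, 3))
ytest = np.array([mechanistic_proxy(*row) for row in Xtest])
pred = np.array([surrogate(q) for q in Xtest])
print("median relative error:", np.median(np.abs(pred - ytest) / ytest))
```

    In practice the library would come from actual simulations and the surrogate would be a richer learner, but the train-on-library / validate-on-new-hillslopes structure is the same.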

  20. Anisotropic modulus stabilisation. Strings at LHC scales with micron-sized extra dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Burgess, C.P. [McMaster Univ., Hamilton (Canada). Dept. of Physics and Astronomy; Perimeter Institute for Theoretical Physics, Waterloo (Canada); Quevedo, F. [Cambridge Univ. (United Kingdom). DAMTP/CMS; Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)

    2011-04-15

    We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exist having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are present on K3-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and

  1. Anisotropic modulus stabilisation. Strings at LHC scales with micron-sized extra dimensions

    International Nuclear Information System (INIS)

    Cicoli, M.; Burgess, C.P.; Quevedo, F.

    2011-04-01

    We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exist having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are present on K3-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and

  2. Annotated bibliography on the impacts of size and scale of silvopasture in the Southeastern U.S.A

    Science.gov (United States)

    Gregory E. Frey; Marcus M. Comer

    2018-01-01

    Silvopasture, the integration of trees and pasture for livestock, has numerous potential benefits for producers. However, size or scale of the operation may affect those benefits. A review of relevant research on the scale and size economies of silvopasture, general forestry, and livestock agriculture was undertaken to better understand potential silvopasture...

  3. Verification of Gyrokinetic Particle Simulation of Device Size Scaling of Turbulent Transport

    Institute of Scientific and Technical Information of China (English)

    LIN Zhihong; S. ETHIER; T. S. HAHM; W. M. TANG

    2012-01-01

    Verification and historical perspective are presented on the gyrokinetic particle simulations that discovered the device size scaling of turbulent transport and identified the geometry model as the source of the long-standing disagreement between gyrokinetic particle and continuum simulations.

  4. Multi products single machine economic production quantity model with multiple batch size

    Directory of Open Access Journals (Sweden)

    Ata Allah Taleizadeh

    2011-04-01

    In this paper, a multi-product single-machine economic production quantity model with discrete delivery is developed. A unique cycle length is considered for all produced items, with the assumption that all products are manufactured on a single machine with limited capacity. The proposed model considers different cost components such as production, setup, holding, and transportation costs. The resulting model is formulated as a mixed integer nonlinear programming model. A harmony search algorithm, the extended cutting plane method and particle swarm optimization are used to solve the proposed model. Two numerical examples are used to analyze and to evaluate the performance of the proposed model.
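    The common-cycle idea behind such models can be illustrated with the textbook economic-lot-scheduling form (this is a simplification, not the paper's mixed-integer model, and all cost data below are assumed):

```python
import math

# Common-cycle sketch: all items share one cycle length T on a single
# capacity-limited machine. Data are invented for illustration.
demand = [40.0, 25.0, 10.0]       # d_i, units per period (assumed)
prod_rate = [200.0, 150.0, 90.0]  # p_i, units per period (assumed)
setup = [50.0, 80.0, 30.0]        # A_i, setup cost per run (assumed)
hold = [0.5, 0.8, 1.2]            # h_i, holding cost/unit/period (assumed)

# capacity feasibility: total machine utilisation must stay below 1
assert sum(d / p for d, p in zip(demand, prod_rate)) < 1.0

K = sum(setup)                                   # setup cost per cycle
H = sum(h * d * (1 - d / p) / 2                  # holding cost rate
        for h, d, p in zip(hold, demand, prod_rate))
T_star = math.sqrt(K / H)                        # optimal common cycle
cost = K / T_star + H * T_star                   # minimum cost per period
print(f"T* = {T_star:.3f}, cost = {cost:.2f}")
```

    Metaheuristics such as harmony search or PSO become necessary once batch counts are integer decision variables, which this closed-form sketch deliberately omits.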

  5. A Review of Machine Learning and Data Mining Approaches for Business Applications in Social Networks

    OpenAIRE

    Evis Trandafili; Marenglen Biba

    2013-01-01

    Social networks have an outstanding marketing value and developing data mining methods for viral marketing is a hot topic in the research community. However, most social networks remain impossible to fully analyze and understand due to their prohibitive sizes and the inability of traditional machine learning and data mining approaches to deal with the new dimension in the learning process related to the large-scale environment where the data are produced. On one hand, the birth and evolution...

  6. Many ways to be small: different environmental regulators of size generate distinct scaling relationships in Drosophila melanogaster

    OpenAIRE

    Shingleton, Alexander W.; Estep, Chad M.; Driscoll, Michael V.; Dworkin, Ian

    2009-01-01

    Static allometries, the scaling relationship between body and trait size, describe the shape of animals in a population or species, and are generated in response to variation in genetic or environmental regulators of size. In principle, allometries may vary with the different size regulators that generate them, which can be problematic since allometric differences are also used to infer patterns of selection on morphology. We test this hypothesis by examining the patterns of scaling in Drosop...

  7. Synchronous machines. General principles and structures; Machines synchrones. Principes generaux et structures

    Energy Technology Data Exchange (ETDEWEB)

    Ben Ahmed, H.; Feld, G.; Multon, B. [Ecole Normale Superieure de Cachan, Lab. SATIE, Systemes et Applications des Technologies de l' Information et de l' Energie, UMR CNRS 8029, 94 (France); Bernard, N. [Institut Universitaire de Saint-Nazaire, Institut de Recherche en Electrotechnique et Electronique de Nantes Atlantique (IREENA), 44 - Nantes (France)

    2005-10-01

    Power generation is mainly performed by synchronous rotating machines, which consume about a third of the world's primary energy. Electric motors used in industrial applications convert about two thirds of this electricity. Therefore, synchronous machines are present everywhere at different scales, from micro-actuators of a few micro-watts to thermo-mechanical production units of more than 1 GW, and represent a large variety of structures which have in common the synchronism between the frequency of the power supply currents and the relative movement of the fixed part with respect to the mobile part. For several decades, these machines have been increasingly used as variable-speed motors with permanent magnets. The advances in power electronics have contributed to the widening of their use in various applications with a huge range of powers. This article presents the general principle of operation of electromechanical converters of the synchronous type: 1 - electromechanical conversion in electromagnetic systems: basic laws and elementary structures (elementary structure, energy conversion cycle, case of a system working in linear magnetic regime), rotating field structures (magneto-motive force and Ferraris theorem, superficial air gap permeance, air gap magnetic induction, case of a permanent magnet inductor, magnetic energy and electromagnetic torque, conditions for reaching a non-null average torque, application to common cases); 2 - constitution, operation modes and efficiency: constitution and main types of synchronous machines, efficiency - analysis by similarity laws (other expression of the electromagnetic torque, thermal limitation in permanent regime, scale effects, effect of pole pair number, examples of efficiencies and domains of use), operation modes. (J.S.)
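    The synchronism condition described in this record reduces to a one-line relation between supply frequency and pole-pair count:

```python
# Synchronous speed: the rotor is locked to the supply frequency f [Hz]
# and the machine's number of pole pairs p, giving n = 60 f / p [rpm].
def sync_speed_rpm(f_hz: float, pole_pairs: int) -> float:
    return 60.0 * f_hz / pole_pairs

print(sync_speed_rpm(50.0, 1))  # 2-pole machine on 50 Hz -> 3000.0 rpm
print(sync_speed_rpm(50.0, 2))  # 4-pole machine on 50 Hz -> 1500.0 rpm
```

    This is why, as the record notes, pole-pair number is a basic design lever across the whole power range of synchronous machines.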

  8. Dynamics and Thermodynamics of Molecular Machines

    DEFF Research Database (Denmark)

    Golubeva, Natalia

    2014-01-01

    Molecular machines, or molecular motors, are small biophysical devices that perform a variety of essential metabolic processes such as DNA replication, protein synthesis and intracellular transport. Typically, these machines operate by converting chemical energy into motion and mechanical work. Due to their microscopic size, molecular motors are governed by principles fundamentally different from those describing the operation of man-made motors such as car engines. In this dissertation the dynamic and thermodynamic properties of molecular machines are studied using the tools of nonequilibrium statistical mechanics. The first part focuses on noninteracting molecular machines described by a paradigmatic continuum model with the aim of comparing and contrasting such a description to the one offered by the widely used discrete models. Many molecular motors, for example, kinesin involved in cellular cargo...

  9. Nonstandard scaling law of fluctuations in finite-size systems of globally coupled oscillators.

    Science.gov (United States)

    Nishikawa, Isao; Tanaka, Gouhei; Aihara, Kazuyuki

    2013-08-01

    Universal scaling laws form one of the central issues in physics. A nonstandard scaling law or a breakdown of a standard scaling law, on the other hand, can often lead to the finding of a new universality class in physical systems. Recently, we found that a statistical quantity related to fluctuations follows a nonstandard scaling law with respect to the system size in a synchronized state of globally coupled nonidentical phase oscillators [I. Nishikawa et al., Chaos 22, 013133 (2012)]. However, it is still unclear how widely this nonstandard scaling law is observed. In the present paper, we discuss the conditions required for the unusual scaling law in globally coupled oscillator systems and validate the conditions by numerical simulations of several different models.
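    The setting of this record, globally coupled nonidentical phase oscillators, can be sketched with a minimal mean-field Kuramoto simulation; the coupling strength, frequency spread, and system size below are illustrative and not taken from the paper:

```python
import numpy as np

# Mean-field Kuramoto model: dtheta_i/dt = omega_i + K r sin(psi - theta_i),
# where r e^{i psi} is the complex order parameter. Parameters are assumed.
rng = np.random.default_rng(1)

def order_parameter_series(N, K, steps=2000, dt=0.01):
    omega = rng.normal(0.0, 0.5, N)        # nonidentical frequencies
    theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases
    r_vals = []
    for _ in range(steps):
        z = np.exp(1j * theta).mean()      # Kuramoto order parameter
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
        r_vals.append(r)
    return np.array(r_vals[steps // 2:])   # discard the transient

r = order_parameter_series(N=200, K=4.0)   # strongly synchronized regime
print("mean r:", r.mean(), "fluctuation:", r.std())
```

    Studying how the fluctuation `r.std()` changes as `N` grows is exactly the kind of finite-size scaling analysis the record discusses.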

  10. Multi-view L2-SVM and its multi-view core vector machine.

    Science.gov (United States)

    Huang, Chengquan; Chung, Fu-lai; Wang, Shitong

    2016-03-01

    In this paper, a novel L2-SVM based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias in its objective function and hence has the flexibility, like ν-SVC, that the number of yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the difference of different views through imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into its GCVM version (MvCVM), which can realize fast training on large scale multi-view datasets, with asymptotic linear time complexity in the sample size and space complexity independent of the sample size. Our experimental results demonstrated the effectiveness of the proposed Multi-view L2-SVM classifier for small scale multi-view datasets and the proposed MvCVM classifier for large scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. FRICTION - WELDING MACHINE AUTOMATIC CONTROL CIRCUIT DESIGN AND APPLICATION

    Directory of Open Access Journals (Sweden)

    Hakan ATEŞ

    2003-02-01

    In this work, the automatic controllability of a laboratory-sized friction-welding machine has been investigated. The laboratory-sized friction-welding machine was composed of a motor, a brake, rotary and stationary sample clamping pliers, and a hydraulic unit. In the automatic method, welding parameters such as friction time, friction pressure, forge time and forge pressure can be applied precisely using time relays and contactors. At the end of the experimental study it was observed that the automatic control system worked successfully.

  12. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    Science.gov (United States)

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images, hence to date, require manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.
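    The self-tuning (parameter-free) thresholding idea in this record can be illustrated with the classic isodata/Ridler-Calvard iteration on a synthetic eye image; this is a stand-in in the same spirit, not the authors' exact algorithm:

```python
import numpy as np

# Parameter-free threshold: iterate t to the midpoint of the means of the
# two classes it induces (isodata iteration); no hand-tuned constant.
def self_tuning_threshold(gray):
    t = gray.mean()                       # data-driven initial guess
    for _ in range(100):
        below, above = gray[gray <= t], gray[gray > t]
        new_t = 0.5 * (below.mean() + above.mean())
        if abs(new_t - t) < 1e-6:
            break
        t = new_t
    return t

# Synthetic "eye image": dark pupil disc on a bright IR background.
rng = np.random.default_rng(0)
img = rng.normal(180, 10, (120, 160))          # background intensities
yy, xx = np.mgrid[:120, :160]
pupil = (yy - 60) ** 2 + (xx - 80) ** 2 < 25 ** 2
img[pupil] = rng.normal(40, 8, pupil.sum())    # dark pupil pixels

t = self_tuning_threshold(img)
mask = img < t                                 # segmented pupil
print("threshold:", round(t, 1), "pupil pixels:", int(mask.sum()))
```

    On real video the segmented mask would then feed the convex-hull and dual-ellipse fitting stages the record describes to recover pupil size and blink state.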

  13. Steady-state numerical modeling of size effects in micron scale wire drawing

    DEFF Research Database (Denmark)

    Juul, Kristian Jørgensen; Nielsen, Kim Lau; Niordson, Christian Frithiof

    2017-01-01

    Wire drawing processes at the micron scale have received increased interest as micro wires are increasingly required in electrical components. It is well-established that size effects due to large strain gradients play an important role at this scale, and the present study aims to quantify these effects for the wire drawing process. Focus is on investigating the impact of size effects on the most favourable tool geometry (in terms of minimizing the drawing force) for various conditions at the wire/tool interface. The numerical analysis is based on a steady-state framework that enables convergence without dealing with the transient regime, but still fully accounts for the history dependence as well as the elastic unloading. Thus, it forms the basis for a comprehensive parameter study. During the deformation process in wire drawing, large plastic strain gradients evolve in the contact region...

  14. Towards modeling intergranular stress corrosion cracks on grain size scales

    International Nuclear Information System (INIS)

    Simonovski, Igor; Cizelj, Leon

    2012-01-01

    Highlights: - Simulating the onset and propagation of intergranular cracking. - Model based on the as-measured geometry and crystallographic orientations. - Feasibility and performance of the proposed computational approach demonstrated. Abstract: Development of advanced models at the grain size scale has so far been mostly limited to simulated geometry structures such as, for example, 3D Voronoi tessellations. The difficulty came from a lack of non-destructive techniques for measuring the microstructures. In this work a novel grain-size scale approach for modelling intergranular stress corrosion cracking based on the as-measured 3D grain structure of a 400 μm stainless steel wire is presented. Grain topologies and crystallographic orientations are obtained using diffraction contrast tomography, reconstructed within a detailed finite element model and coupled with advanced constitutive models for grains and grain boundaries. The wire is composed of 362 grains and over 1600 grain boundaries. Grain boundary damage initialization and early development is then explored for a number of cases, ranging from isotropic elasticity up to crystal plasticity constitutive laws for the bulk grain material. In all cases the grain boundaries are modeled using the cohesive zone approach. The feasibility of the approach is explored.

  15. Reliability assessment of the fueling machine of the CANDU reactor

    International Nuclear Information System (INIS)

    Al-Kusayer, T.A.

    1985-01-01

    Fueling of CANDU reactors is carried out by two fueling machines, each serving one end of the reactor. The fueling machine becomes a part of the primary heat transport system during refueling operations, and hence some fueling machine malfunctions could result in a small-scale loss-of-coolant accident. Fueling machine failures and the failure sequences are discussed. The unavailability of the fueling machine is estimated by using fault tree analysis. The probability of mechanical failure of the fueling machine interface is estimated as 1.08 x 10^-5. (orig.)
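    How a fault-tree analysis rolls basic-event probabilities up into a single top-event figure can be sketched with plain gate logic; the event probabilities below are invented for illustration and are not taken from the CANDU assessment:

```python
# Minimal fault-tree gate logic for independent basic events.
def or_gate(*p):
    # union of independent events: 1 - prod(1 - p_i)
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    # all independent events must occur: prod(p_i)
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Hypothetical basic events: seal failure, clamp failure, and a redundant
# valve pair that only fails if both valves fail together.
top = or_gate(2e-6, 3e-6, and_gate(1e-3, 5e-3))
print(f"top-event probability: {top:.3e}")
```

    Real analyses additionally handle common-cause failures and repair rates, which simple independent-event gates cannot capture.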

  16. Electric machines

    CERN Document Server

    Gross, Charles A

    2006-01-01

    BASIC ELECTROMAGNETIC CONCEPTS: Basic Magnetic Concepts; Magnetically Linear Systems: Magnetic Circuits; Voltage, Current, and Magnetic Field Interactions; Magnetic Properties of Materials; Nonlinear Magnetic Circuit Analysis; Permanent Magnets; Superconducting Magnets; The Fundamental Translational EM Machine; The Fundamental Rotational EM Machine; Multiwinding EM Systems; Leakage Flux; The Concept of Ratings in EM Systems; Summary; Problems. TRANSFORMERS: The Ideal n-Winding Transformer; Transformer Ratings and Per-Unit Scaling; The Nonideal Three-Winding Transformer; The Nonideal Two-Winding Transformer; Transformer Efficiency and Voltage Regulation; Practical Considerations; The Autotransformer; Operation of Transformers in Three-Phase Environments; Sequence Circuit Models for Three-Phase Transformer Analysis; Harmonics in Transformers; Summary; Problems. BASIC MECHANICAL CONSIDERATIONS: Some General Perspectives; Efficiency; Load Torque-Speed Characteristics; Mass Polar Moment of Inertia; Gearing; Operating Modes; Translational Systems; A Comprehensive Example: The Elevator; P...

  17. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the time-scale problem is addressed with a recently developed enhanced sampling method while contextually correcting for finite-size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
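    The rate-recovery step used with such enhanced-sampling simulations can be sketched on synthetic data: biased escape times are rescaled by the bias acceleration factor and averaged into a physical rate. All numbers below are synthetic, and the single constant acceleration is a simplification of the running average used in practice:

```python
import numpy as np

# Synthetic stand-in for infrequent-metadynamics rate recovery:
# physical time = biased simulation time * acceleration factor,
# with alpha = <exp(beta * V_bias)> taken here as a fixed constant.
rng = np.random.default_rng(2)

true_rate = 1e-6                  # events per unit time (synthetic)
acceleration = 1e4                # assumed constant bias acceleration

# 50 independent "biased runs": escape times sped up by the bias
biased_times = rng.exponential(1.0 / true_rate, 50) / acceleration

unbiased_times = biased_times * acceleration   # rescale to physical time
rate_estimate = 1.0 / unbiased_times.mean()    # Poisson-process estimate
print(f"estimated rate: {rate_estimate:.2e}")
```

    In real applications one also checks that the rescaled times are Poisson-distributed (e.g. by a Kolmogorov-Smirnov test) before trusting the rate.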

  18. The scaling of urban surface water abundance and impairment with city size

    Science.gov (United States)

    Steele, M. K.

    2018-03-01

    Urbanization alters surface water compared to nonurban landscapes, yet little is known regarding how basic aquatic ecosystem characteristics, such as the abundance and impairment of surface water, differ with population size or regional context. This study examined the abundance, scaling, and impairment of surface water by quantifying the stream length, water body area, and impaired stream length for 3520 cities in the United States with populations from 2500 to 18 million. Stream length, water body area, and impaired stream length were quantified using the National Hydrography Dataset and the EPA's 303(d) list. These metrics were scaled with population and city area using single and piecewise power-law models and related to biophysical factors (precipitation, topography) and land cover. Results show that abundance of stream length and water body area in cities actually increases with city area; however, the per person abundance decreases with population size. Relative to population, impaired stream length did not increase until city populations were > 25,000 people, then scaled linearly with population. Some variation in abundance and impairment was explained by biophysical context and land cover. Development intensity correlated with stream density and impairment; however, those relationships depended on the orientation of the land covers. When high intensity development occupied the local elevation highs (+ 15 m) and undeveloped land the elevation lows, the percentage of impaired streams was less than the opposite land cover orientation (- 15 m) or very flat land. These results show that surface water abundance and impairment across contiguous US cities are influenced by city size and by biophysical setting interacting with land cover intensity.
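    The single power-law scaling used in this style of analysis reduces to a linear regression in log-log space; the synthetic data below (population range, exponent, noise) are invented for illustration:

```python
import numpy as np

# Power-law scaling fit: y = c * x^b becomes
# log(y) = log(c) + b * log(x), an ordinary linear regression.
rng = np.random.default_rng(3)

pop = np.logspace(3.5, 7, 200)                    # synthetic populations
stream_km = 2.0 * pop ** 0.8 * rng.lognormal(0, 0.1, 200)  # noisy y

b, logc = np.polyfit(np.log(pop), np.log(stream_km), 1)
print(f"scaling exponent b = {b:.3f} (sublinear if b < 1)")
```

    A piecewise fit, as used in the record for impaired stream length, repeats this regression on either side of a breakpoint (here, around 25,000 people).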

  19. Micro Fine Sized Palm Oil Fuel Ash Produced Using a Wind Tunnel Production System

    Directory of Open Access Journals (Sweden)

    R. Ahmadi

    2016-01-01

    Micro fine sized palm oil fuel ash (POFA) is a new supplementary cementitious material that can increase the strength, durability, and workability of concrete. However, production of this material incurs high cost and is not practical for the construction industry. This paper investigates a simple methodology of producing micro fine sized POFA by means of a laboratory-scale wind tunnel system. The raw POFA obtained from an oil palm factory is first calcined to remove carbon residue and then ground in a Los Angeles abrasion machine. The ground POFA is then blown in the fabricated wind tunnel system for separation into different ranges of particle sizes. The physical, morphological, and chemical properties of the micro fine sized POFA were then investigated using a Laser Particle Size Analyser (PSA), nitrogen sorption, and Scanning Electron Microscopy with Energy Dispersive X-Ray (SEM-EDX). A total of 32.1% micro fine sized POFA was collected from each sample blown, with a size range of 1-10 micrometers. The devised laboratory-scale wind tunnel production system is successful in producing micro fine sized POFA and, with modifications, this system is envisaged applicable to commercializing micro fine sized POFA production for the construction industry.

  20. Improved Saturated Hydraulic Conductivity Pedotransfer Functions Using Machine Learning Methods

    Science.gov (United States)

    Araya, S. N.; Ghezzehei, T. A.

    2017-12-01

    Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome, and instead pedotransfer functions (PTFs) are often used to estimate it. Despite a lot of progress over the years, generic PTFs that estimate hydraulic conductivity generally do not perform well. We develop significantly improved PTFs by applying state-of-the-art machine learning techniques coupled with high-performance computing on a large database of over 20,000 soils—USKSAT and the Florida Soil Characterization databases. We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance, with root mean square errors less than 0.7 and mean errors on the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included percent clay, bulk density, organic carbon percent, coefficient of uniformity and values derived from water retention characteristics. Model performances were consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine-learning-based PTFs to estimate Ks. The study also evaluates the importance of various soil properties and their transformations in explaining Ks.
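    A machine-learning PTF of the kind compared in this record can be sketched with a k-nearest-neighbour regressor on synthetic soil data; the study itself found gradient boosting best on USKSAT, and none of the numbers or the toy Ks relation below come from it:

```python
import numpy as np

# Toy pedotransfer function: predict log10(Ks) from D10 and clay content
# with a k-nearest-neighbour regressor. All data are synthetic.
rng = np.random.default_rng(4)

n = 1000
d10 = rng.uniform(0.001, 0.5, n)       # effective particle size [mm]
clay = rng.uniform(0, 60, n)           # percent clay
# synthetic log10(Ks) loosely mimicking Hazen-like behaviour (assumed)
log_ks = 2.0 + 2.0 * np.log10(d10) - 0.01 * clay + rng.normal(0, 0.2, n)

X = np.column_stack([np.log10(d10), clay])
Xs = (X - X.mean(0)) / X.std(0)        # standardize the features

def knn_predict(q, k=10):
    d = np.linalg.norm(Xs - q, axis=1)
    return log_ks[np.argsort(d)[:k]].mean()

pred = np.array([knn_predict(Xs[i]) for i in range(200)])
rmse = np.sqrt(np.mean((pred - log_ks[:200]) ** 2))
print(f"RMSE on log10(Ks): {rmse:.2f}")
```

    A proper evaluation would hold out a test set rather than predicting points present in the library; the sketch keeps only the structure of feature standardization, neighbour lookup, and log-scale error reporting.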

  1. SEAMLESS TECHNOLOGY ON CIRCULAR KNITTING MACHINES

    Directory of Open Access Journals (Sweden)

    CRETU Viorica

    2014-05-01

    Full Text Available With industrial progress, garment manufacturing has evolved from cut & sew to complete garment knitting, which produces an entire garment without a sewing or linking process. Seamless knitting technology is similar to sock manufacture: specialized circular knitting machines produce three-dimensional garments with no side seams, with the waistband integrated with the body of the garment and with knitted washing instructions and logos. The paper starts by presenting the main advantages of seamless garments, but also some limitations of the technology. Because a seamless garment, which is realized as a knitted tube, must ensure the required final chest size, the main components involved are presented: the knitting machine, the garment design and the yarns used. The knitting machines, besides the values of diameters and gauges with a great impact on the chest size, are characterized by a very innovative and complex construction. The design of a seamless garment is fundamentally different from that of garments produced in the traditional way, because the designer must work backwards from a finished garment to create the knitting programme that will ultimately give the correct finished size. The paper ends by presenting some of the applications of seamless products, which cover intimate apparel and other bodywear, outerwear, activewear and functional sportswear, upholstery, industrial, automotive and medical textiles.

  2. PRISMA database machine: A distributed, main-memory approach

    NARCIS (Netherlands)

    Schmidt, J.W.; Apers, Peter M.G.; Ceri, S.; Kersten, Martin L.; Oerlemans, Hans C.M.; Missikoff, M.

    1988-01-01

    The PRISMA project is a large-scale research effort in the design and implementation of a highly parallel machine for data and knowledge processing. The PRISMA database machine is a distributed, main-memory database management system implemented in an object-oriented language that runs on top of a

  3. Turbulent Concentration of mm-Size Particles in the Protoplanetary Nebula: Scale-Dependent Cascades

    Science.gov (United States)

    Cuzzi, J. N.; Hartlep, T.

    2015-01-01

    The initial accretion of primitive bodies (here, asteroids in particular) from freely-floating nebula particles remains problematic. Traditional growth-by-sticking models encounter a formidable "meter-size barrier" (or even a mm-to-cm-size barrier) in turbulent nebulae, making the preconditions for so-called "streaming instabilities" difficult to achieve even for "lucky" particles. Even if growth by sticking could somehow breach the meter-size barrier, turbulent nebulae present further obstacles through the 1-10 km size range. On the other hand, nonturbulent nebulae form large asteroids too quickly to explain long spreads in formation times, or the dearth of melted asteroids. Theoretical understanding of nebula turbulence is itself in flux; recent models of MRI (magnetically-driven) turbulence favor low- or no-turbulence environments, but purely hydrodynamic turbulence is making a comeback, with two recently discovered mechanisms generating robust turbulence which do not rely on magnetic fields at all. An important clue regarding planetesimal formation is an apparent 100 km diameter peak in the pre-depletion, pre-erosion mass distribution of asteroids; scenarios leading directly from independent nebula particulates to large objects of this size, which avoid the problematic m-km size range, could be called "leapfrog" scenarios. The leapfrog scenario we have studied in detail involves formation of dense clumps of aerodynamically selected, typically mm-size particles in turbulence, which can under certain conditions shrink inexorably on 100-1000 orbit timescales and form 10-100 km diameter sandpile planetesimals. There is evidence that at least the ordinary chondrite parent bodies were initially composed entirely of a homogeneous mix of such particles. Thus, while they are arcane, turbulent concentration models acting directly on chondrule-size particles are worthy of deeper study. The typical sizes of planetesimals and the rate of their formation can be

  4. Size effect studies on notched tensile specimens at room temperature and 400 °C

    Energy Technology Data Exchange (ETDEWEB)

    Krompholz, K.; Kamber, J.; Groth, E.; Kalkhof, D

    2000-07-01

    One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess the size effect related to deformation and failure models as well as material data under quasistatic and dynamic conditions in homogeneous and non-homogeneous states of strain. For these investigations the reactor pressure vessel material 20 MnMoNi 55 was selected. It was subjected to a size effect study on notched scaled tensile specimens of three sizes. Two strain rates (2×10⁻⁵/s and 10⁻³/s) and two temperatures (room temperature and 400 °C) were selected. The investigations are aimed at supporting a gradient plasticity approach to size effects. Tests on the small specimens (diameters 2.4 and 7.2 mm) were performed on an electromechanical test machine, while the large specimens (diameter 24 mm) had to be tested on a servohydraulic closed-loop test machine with a force capacity of 1000 kN. All characteristic values were found to be size dependent. A selected semicircular notch retains its shape. The notch opening becomes a chord of a segment of a circle, the notch shape at fracture is a segment of a circle. (author)

  5. Constant size descriptors for accurate machine learning models of molecular properties

    Science.gov (United States)

    Collins, Christopher R.; Gordon, Geoffrey J.; von Lilienfeld, O. Anatole; Yaron, David J.

    2018-06-01

    Two different classes of molecular representations for use in machine learning of thermodynamic and electronic properties are studied. The representations are evaluated by monitoring the performance of linear and kernel ridge regression models on well-studied data sets of small organic molecules. One class of representations studied here counts the occurrence of bonding patterns in the molecule. These require only the connectivity of atoms in the molecule as may be obtained from a line diagram or a SMILES string. The second class utilizes the three-dimensional structure of the molecule. These include the Coulomb matrix and Bag of Bonds, which list the inter-atomic distances present in the molecule, and Encoded Bonds, which encode such lists into a feature vector whose length is independent of molecular size. Encoded Bonds' features introduced here have the advantage of leading to models that may be trained on smaller molecules and then used successfully on larger molecules. A wide range of feature sets are constructed by selecting, at each rank, either a graph or geometry-based feature. Here, rank refers to the number of atoms involved in the feature, e.g., atom counts are rank 1, while Encoded Bonds are rank 2. For atomization energies in the QM7 data set, the best graph-based feature set gives a mean absolute error of 3.4 kcal/mol. Inclusion of 3D geometry substantially enhances the performance, with Encoded Bonds giving 2.4 kcal/mol, when used alone, and 1.19 kcal/mol, when combined with graph features.
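The appeal of a descriptor whose length is independent of molecular size can be illustrated with a simplified, histogram-style encoding of interatomic distances per element pair. This is only loosely inspired by the Encoded Bonds idea; the element set, bin count, and distance cutoff below are arbitrary choices for the sketch, not the paper's actual featurization.

```python
# Minimal fixed-length "encoded bonds"-style descriptor: for each element
# pair, histogram the interatomic distances into a fixed number of bins, so
# molecules of any size map to vectors of identical length.
import math
from itertools import combinations

ELEMENTS = ["H", "C", "N", "O"]
PAIRS = sorted(set(tuple(sorted(p)) for p in combinations(ELEMENTS, 2))
               | set((e, e) for e in ELEMENTS))
N_BINS, R_MAX = 8, 4.0  # fixed bin count and cutoff (angstroms) -> fixed length

def encode(atoms):
    """atoms: list of (symbol, (x, y, z)). Returns a vector of len(PAIRS)*N_BINS."""
    vec = [0.0] * (len(PAIRS) * N_BINS)
    for (s1, p1), (s2, p2) in combinations(atoms, 2):
        r = math.dist(p1, p2)
        if r >= R_MAX:
            continue  # distances beyond the cutoff are dropped
        pair_idx = PAIRS.index(tuple(sorted((s1, s2))))
        bin_idx = int(r / R_MAX * N_BINS)
        vec[pair_idx * N_BINS + bin_idx] += 1.0
    return vec

# Two molecules of different size yield descriptors of the same length.
water = [("O", (0.0, 0.0, 0.0)), ("H", (0.96, 0.0, 0.0)), ("H", (-0.24, 0.93, 0.0))]
methane = [("C", (0.0, 0.0, 0.0))] + [("H", p) for p in
           [(0.63, 0.63, 0.63), (-0.63, -0.63, 0.63),
            (-0.63, 0.63, -0.63), (0.63, -0.63, -0.63)]]
v1, v2 = encode(water), encode(methane)
print(len(v1), len(v2))  # identical lengths regardless of molecule size
```

A model trained on such vectors for small molecules can be applied unchanged to larger ones, which is the property the abstract highlights for Encoded Bonds.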

  6. Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states

    Science.gov (United States)

    de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.

    2015-12-01

    Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.

  7. Latent hardening size effect in small-scale plasticity

    Science.gov (United States)

    Bardella, Lorenzo; Segurado, Javier; Panteghini, Andrea; Llorca, Javier

    2013-07-01

    We aim at understanding the multislip behaviour of metals subject to irreversible deformations at small-scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP), involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of the conventional latent hardening. In order to discuss the capability of the SGCP theory proposed, we implement it into a Finite Element (FE) code and set its material parameters on the basis of the DD results. The SGCP FE code is specifically developed for the boundary value problem under study so that we can implement a fully implicit (Backward Euler) consistent algorithm. Special emphasis is placed on the discussion of the role of the material length scales involved in the SGCP model, from both the mechanical and numerical points of view.

  8. Latent hardening size effect in small-scale plasticity

    International Nuclear Information System (INIS)

    Bardella, Lorenzo; Panteghini, Andrea; Segurado, Javier; Llorca, Javier

    2013-01-01

    We aim at understanding the multislip behaviour of metals subject to irreversible deformations at small-scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP), involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of the conventional latent hardening. In order to discuss the capability of the SGCP theory proposed, we implement it into a Finite Element (FE) code and set its material parameters on the basis of the DD results. The SGCP FE code is specifically developed for the boundary value problem under study so that we can implement a fully implicit (Backward Euler) consistent algorithm. Special emphasis is placed on the discussion of the role of the material length scales involved in the SGCP model, from both the mechanical and numerical points of view. (paper)

  9. Machine learning topological states

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-11-01

    Artificial neural networks and machine learning have now reached a new era after several decades of improvement where applications are to explode in many fields of science, industry, and technology. Here, we use artificial neural networks to study an intriguing phenomenon in quantum physics—the topological phases of matter. We find that certain topological states, either symmetry-protected or with intrinsic topological order, can be represented with classical artificial neural networks. This is demonstrated by using three concrete spin systems, the one-dimensional (1D) symmetry-protected topological cluster state and the 2D and 3D toric code states with intrinsic topological orders. For all three cases, we show rigorously that the topological ground states can be represented by short-range neural networks in an exact and efficient fashion—the required number of hidden neurons is as small as the number of physical spins and the number of parameters scales only linearly with the system size. For the 2D toric-code model, we find that the proposed short-range neural networks can describe the excited states with Abelian anyons and their nontrivial mutual statistics as well. In addition, by using reinforcement learning we show that neural networks are capable of finding the topological ground states of nonintegrable Hamiltonians with strong interactions and studying their topological phase transitions. Our results demonstrate explicitly the exceptional power of neural networks in describing topological quantum states, and at the same time provide valuable guidance to machine learning of topological phases in generic lattice models.

  10. Scaling of heavy ion beam probes for reactor-size devices

    International Nuclear Information System (INIS)

    Hickok, R.L.; Jennings, W.C.; Connor, K.A.; Schoch, P.M.

    1984-01-01

    Heavy ion beam probes for reactor-size plasma devices will require beam energies of approximately 10 MeV. Although accelerator technology appears to be available, beam deflection systems and parallel plate energy analyzers present severe difficulties if existing technology is scaled in a straightforward manner. We propose a different operating mode which will use a fixed beam trajectory and multiple cylindrical energy analyzers. Development effort will still be necessary, but we believe the basic technology is available

  11. Two-motor single-inverter field-oriented induction machine drive ...

    Indian Academy of Sciences (India)

    Multi-machine, single-inverter induction motor drives are attractive in situations in which all machines are of similar ratings, and operate at approximately the same load torques. The advantages include small size compared to multi-inverter system, lower weight and overall cost. However, field oriented control of such drives ...

  12. Automatic detection of ischemic stroke based on scaling exponent electroencephalogram using extreme learning machine

    Science.gov (United States)

    Adhi, H. A.; Wijaya, S. K.; Prawito; Badri, C.; Rezal, M.

    2017-03-01

    Stroke is one of the cerebrovascular diseases caused by the obstruction of blood flow to the brain. Stroke is the leading cause of death in Indonesia and the second in the world. Stroke is also a major cause of disability. Ischemic stroke accounts for most of all stroke cases. Obstruction of blood flow can cause tissue damage, which results in electrical changes in the brain that can be observed through the electroencephalogram (EEG). In this study, we present the results of automatic detection of ischemic stroke and normal subjects based on the scaling exponent of the EEG obtained through detrended fluctuation analysis (DFA), using an extreme learning machine (ELM) as the classifier. The signal processing was performed with 18 channels of EEG in the range of 0-30 Hz. Scaling exponents of the subjects were used as the input for the ELM to classify ischemic stroke. The performance of the detection was assessed by the values of accuracy, sensitivity and specificity. The results showed that the performance of the proposed method in classifying ischemic stroke was 84% accuracy, 82% sensitivity and 87% specificity, with 120 hidden neurons and sine as the activation function of the ELM.
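The feature-extraction step, the DFA scaling exponent, can be sketched in plain Python: integrate the mean-removed signal, detrend it piecewise over windows of several sizes, and fit the log-log slope of the RMS fluctuation. This is a generic textbook DFA, not the authors' exact processing chain; the 18 EEG channels and the ELM classifier are omitted, and synthetic noise stands in for an EEG signal.

```python
# Detrended fluctuation analysis: the scaling exponent alpha is the slope of
# log F(n) versus log n, where F(n) is the RMS of the piecewise linearly
# detrended integrated signal over windows of size n.
import math
import random

def dfa_alpha(signal, window_sizes=(4, 8, 16, 32, 64)):
    mean = sum(signal) / len(signal)
    profile, s = [], 0.0
    for v in signal:            # cumulative sum of the mean-removed signal
        s += v - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in window_sizes:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            xs = range(n)
            mx, my = (n - 1) / 2.0, sum(seg) / n
            sxy = sum((x - mx) * (y - my) for x, y in zip(xs, seg))
            sxx = sum((x - mx) ** 2 for x in xs)
            b = sxy / sxx                      # least-squares linear detrend
            a = my - b * mx
            sq += sum((y - (a + b * x)) ** 2 for x, y in zip(xs, seg))
            count += n
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sq / count))  # log of RMS fluctuation
    mx = sum(log_n) / len(log_n)
    my = sum(log_f) / len(log_f)
    return sum((x - mx) * (y - my) for x, y in zip(log_n, log_f)) / \
           sum((x - mx) ** 2 for x in log_n)

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
print(round(dfa_alpha(white), 2))  # white noise has alpha near 0.5
```

Uncorrelated noise gives an exponent near 0.5 and strongly correlated signals give larger values, which is what makes the exponent usable as a per-channel EEG feature.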

  13. Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)

    2016-12-15

    We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us an advantage to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.

  14. Machine-Learning Research

    OpenAIRE

    Dietterich, Thomas G.

    1997-01-01

    Machine-learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (1) the improvement of classification accuracy by learning ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models.

  15. A finite size scaling test of an SU(2) gauge-spin system

    International Nuclear Information System (INIS)

    Tomiya, M.; Hattori, T.

    1984-01-01

    We calculate the correlation functions in the SU(2) gauge-spin system with spins in the fundamental representation. We analyze the result making use of finite size scaling. There is a possibility that there are no second order phase transition lines in this model, contrary to previous assertions. (orig.)

  16. Asymmetric fluid criticality. II. Finite-size scaling for simulations.

    Science.gov (United States)

    Kim, Young C; Fisher, Michael E

    2003-10-01

    The vapor-liquid critical behavior of intrinsically asymmetric fluids is studied in finite systems of linear dimensions L focusing on periodic boundary conditions, as appropriate for simulations. The recently propounded "complete" thermodynamic (L → ∞) scaling theory incorporating pressure mixing in the scaling fields as well as corrections to scaling [Phys. Rev. E 67, 061506 (2003)] is extended to finite L, initially in a grand canonical representation. The theory allows for a Yang-Yang anomaly in which, when L → ∞, the second temperature derivative d²μ_σ/dT² of the chemical potential along the phase boundary μ_σ(T) diverges when T → T_c⁻. The finite-size behavior of various special critical loci in the temperature-density or (T, ρ) plane, in particular, the k-inflection susceptibility loci and the Q-maximal loci, derived from Q_L(T, ⟨ρ⟩_L) ≡ ⟨m²⟩²_L/⟨m⁴⟩_L where m ≡ ρ − ⟨ρ⟩_L, is carefully elucidated and shown to be of value in estimating T_c and ρ_c. Concrete illustrations are presented for the hard-core square-well fluid and for the restricted primitive model electrolyte including an estimate of the correlation exponent ν that confirms Ising-type character. The treatment is extended to the canonical representation where further complications appear.
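The moment ratio behind the Q-maximal loci, Q ≡ ⟨m²⟩²/⟨m⁴⟩ with m the deviation of the density from its mean, is straightforward to estimate from sampled fluctuations. In the sketch below Gaussian draws stand in for grand canonical simulation data; for Gaussian fluctuations (the near-critical single-phase limit) the ratio tends to 1/3.

```python
# Estimate the finite-size moment ratio Q = <m^2>^2 / <m^4> from samples of
# the fluctuating density. Gaussian samples are a stand-in for simulation
# data; a Gaussian distribution has <m^4> = 3<m^2>^2, so Q -> 1/3.
import random

def q_ratio(samples):
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n   # second central moment
    m4 = sum((x - mean) ** 4 for x in samples) / n   # fourth central moment
    return m2 * m2 / m4

random.seed(2)
gauss = [random.gauss(0.0, 1.0) for _ in range(200000)]
print(round(q_ratio(gauss), 2))  # close to 1/3 for Gaussian fluctuations
```

Locating, at each temperature, the density where this ratio is maximal traces out the Q-maximal locus the abstract uses to estimate T_c and ρ_c.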

  17. Scikit-learn: Machine Learning in Python

    OpenAIRE

    Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu; Perrot, Matthieu

    2011-01-01

    Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic ...

  18. Scikit-learn: Machine Learning in Python

    OpenAIRE

    Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Louppe, Gilles; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu

    2012-01-01

    Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings....

  19. DESIGN EVALUATIONS OF DOUBLE ROTOR SWITCHED RELUCTANCE MACHINE

    Directory of Open Access Journals (Sweden)

    C.V. ARAVIND

    2016-02-01

    Full Text Available The absence of magnets makes the reluctance machine suitable for low-cogging operation, with the torque depending on the stator-rotor interaction area. The air gap between the stator pole and rotor pole has a large effect on the reluctance variation. The primitive double rotor switched reluctance machine fails to reduce the torque ripple, although its torque density is higher than that of conventional machines. An optimised circular hole position and dimension in the stator pole lowers the torque ripple and reduces the acoustic noise, as presented in this paper. A comparative evaluation of the conventional double rotor machine with this improved structure is done through numerical design and evaluations for the same sizing. It is found that the motor constant square density of the double rotor switched reluctance machine is improved by 140% compared to the conventional machine.

  20. Tempo in electronic gaming machines affects behavior among at-risk gamblers.

    Science.gov (United States)

    Mentzoni, Rune A; Laberg, Jon Christian; Brunborg, Geir Scott; Molde, Helge; Pallesen, Ståle

    2012-09-01

    Background and aims Electronic gaming machines (EGMs) may be a particularly addictive form of gambling, and gambling speed is believed to contribute to the addictive potential of such machines. The aim of the current study was to generate more knowledge concerning speed as a structural characteristic in gambling, by comparing the effects of three different bet-to-outcome intervals (BOI) on gamblers' bet sizes, game evaluations and illusion of control during gambling on a computer-simulated slot machine. Furthermore, we investigated whether problem gambling moderates effects of BOI on gambling behavior and cognitions. Methods 62 participants played a computerized slot machine with either fast (400 ms), medium (1700 ms) or slow (3000 ms) BOI. The SOGS-R was used to measure pre-existing gambling problems. Mean bet size, game evaluations and illusion of control comprised the dependent variables. Results Gambling speed had no overall effect on mean bet size, game evaluations or illusion of control, but in the 400 ms condition, at-risk gamblers (SOGS-R score > 0) employed higher bet sizes compared to no-risk (SOGS-R score = 0) gamblers. Conclusions The findings corroborate and elaborate on previous studies and indicate that restrictions on gambling speed may serve as a harm-reducing effort for at-risk gamblers.

  1. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
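Both diagnostics used above to fix the scale of variability, the empirical semivariogram and a characteristic length taken from the autocorrelation, can be sketched for a 1D transect. The AR(1) synthetic data and the zero-crossing integration cutoff below are illustrative choices for the sketch, not the study's processing.

```python
# Empirical semivariance gamma(h) and a characteristic length computed as the
# integral (here: discrete sum) of the autocorrelation up to its first zero
# crossing. A synthetic AR(1) transect with a known ~10-cell e-folding scale
# stands in for the remotely sensed input fields.
import math
import random

def semivariance(z, h):
    pairs = [(z[i], z[i + h]) for i in range(len(z) - h)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

def characteristic_length(z, dx=1.0, max_lag=None):
    n = len(z)
    max_lag = max_lag or n // 4
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    length = 0.0
    for h in range(1, max_lag):
        rho = sum((z[i] - mean) * (z[i + h] - mean)
                  for i in range(n - h)) / ((n - h) * var)
        if rho <= 0:       # integrate the autocorrelation up to its zero crossing
            break
        length += rho * dx
    return length

random.seed(3)
phi = math.exp(-1.0 / 10.0)       # AR(1) coefficient: e-folding scale ~10 cells
z, v = [], 0.0
for _ in range(5000):
    v = phi * v + random.gauss(0.0, 1.0)
    z.append(v)
print(semivariance(z, 1) < semivariance(z, 20))  # semivariance grows with lag
```

The recovered characteristic length lands near the imposed 10-cell correlation scale, which is how such estimates motivate a grid-size choice like the ~10 m reported above.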

  2. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  3. Programming and machining of complex parts based on CATIA solid modeling

    Science.gov (United States)

    Zhu, Xiurong

    2017-09-01

    Complex parts are designed using CATIA solid modeling, programming, and simulated machining, illustrating the importance of programming and process technology in the field of CNC machining. In the part design process, the working principle is first analyzed in depth; the dimensions are then designed, with each dimension chain connected to the others. Back-calculation and a variety of other methods are then used to compute the final dimensions of the parts. The part material was studied carefully and tested repeatedly, with 6061 aluminum alloy chosen in the end. According to the actual situation of the machining site, various factors in the machining process must be considered comprehensively. The simulation should be based on the actual machining process, not only on the part shape. The result can be used as a reference for machining.

  4. Multiple atomic scale solid surface interconnects for atom circuits and molecule logic gates

    International Nuclear Information System (INIS)

    Joachim, C; Martrou, D; Gauthier, S; Rezeq, M; Troadec, C; Jie Deng; Chandrasekhar, N

    2010-01-01

    The scientific and technical challenges involved in building the planar electrical connection of an atomic scale circuit to N electrodes (N > 2) are discussed. The practical, laboratory scale approach explored today to assemble a multi-access atomic scale precision interconnection machine is presented. Depending on the surface electronic properties of the targeted substrates, two types of machines are considered: on moderate surface band gap materials, scanning tunneling microscopy can be combined with scanning electron microscopy to provide an efficient navigation system, while on wide surface band gap materials, atomic force microscopy can be used in conjunction with optical microscopy. The size of the planar part of the circuit should be minimized on moderate band gap surfaces to avoid current leakage, while this requirement does not apply to wide band gap surfaces. These constraints impose different methods of connection, which are thoroughly discussed, in particular regarding the recent progress in single atom and molecule manipulations on a surface.

  5. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
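The separation the article advocates, a compact model specification handed to a generic inference engine, can be illustrated with a toy example. The sketch below is not the Infer.NET API and not the article's modelling language: it specifies a Bernoulli model with a uniform prior as two plain Python functions and runs a generic grid-approximation posterior computation that knows nothing about the particular model.

```python
# Model-based flavor in miniature: the inference routine is generic; the
# "model" is just a prior density and a likelihood supplied as functions.
def grid_posterior(prior_pdf, likelihood, data, grid):
    """Generic inference: normalized posterior over `grid` for any model."""
    weights = []
    for theta in grid:
        w = prior_pdf(theta)
        for x in data:
            w *= likelihood(x, theta)
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

# Model specification (the application-specific part):
prior = lambda theta: 1.0                       # Beta(1,1), i.e. uniform prior
likelihood = lambda x, theta: theta if x == 1 else 1.0 - theta

grid = [i / 100 for i in range(1, 100)]
data = [1, 1, 0, 1, 1, 1, 0, 1]                 # 6 successes, 2 failures
post = grid_posterior(prior, likelihood, data, grid)
mean = sum(t * p for t, p in zip(grid, post))
print(round(mean, 2))  # posterior mean; analytically (6+1)/(8+2) = 0.7
```

Swapping in a different prior or likelihood changes the model without touching the inference code, which is the essence of the model-based methodology; production systems like Infer.NET replace the grid with efficient message-passing inference.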

  6. Design of water-repellant coating using dual scale size of hybrid silica nanoparticles on polymer surface

    Science.gov (United States)

    Conti, J.; De Coninck, J.; Ghazzal, M. N.

    2018-04-01

    The dual-scale size of the silica nanoparticles is commonly aimed at producing dual-scale roughness, also called hierarchical roughness (Lotus effect). In this study, we describe a method to build a stable water-repellant coating with controlled roughness. Hybrid silica nanoparticles are self-assembled over a polymeric surface by alternating consecutive layers, each one using homogeneously distributed silica nanoparticles of a particular size. The effect of the nanoparticle size of the first layer on the final roughness of the coating is studied. The first layer makes it possible to adjust the distance between the silica nanoparticles of the upper layer, leading to a tuneable and controlled final roughness. An optimal nanoparticle size has been found for higher water repellency. Furthermore, the stability of the coating on the polymeric surface (a polycarbonate substrate) is ensured by photopolymerization of the hybridized silica nanoparticles using vinyl functional groups.

  7. Perspex machine: V. Compilation of C programs

    Science.gov (United States)

    Spanner, Matthew P.; Anderson, James A. D. W.

    2006-01-01

    The perspex machine arose from the unification of the Turing machine with projective geometry. The original, constructive proof used four special, perspective transformations to implement the Turing machine in projective geometry. These four transformations are now generalised and applied in a compiler, implemented in Pop11, that converts a subset of the C programming language into perspexes. This is interesting both from a geometrical and a computational point of view. Geometrically, it is interesting that program source can be converted automatically to a sequence of perspective transformations and conditional jumps, though we find that the product of homogeneous transformations with normalisation can be non-associative. Computationally, it is interesting that program source can be compiled for a Reduced Instruction Set Computer (RISC), the perspex machine, that is a Single Instruction, Zero Exception (SIZE) computer.

  8. Effects of chlorpyrifos on soil carboxylesterase activity at an aggregate-size scale.

    Science.gov (United States)

    Sanchez-Hernandez, Juan C; Sandoval, Marco

    2017-08-01

    The impact of pesticides on extracellular enzyme activity has been mostly studied on the bulk soil scale, and our understanding of the impact on an aggregate-size scale remains limited. Because microbial processes, and their extracellular enzyme production, are dependent on the size of soil aggregates, we hypothesized that the effect of pesticides on enzyme activities is aggregate-size specific. We performed three experiments using an Andisol to test the interaction between carboxylesterase (CbE) activity and the organophosphorus (OP) chlorpyrifos. First, we compared esterase activity among aggregates of different size spiked with chlorpyrifos (10 mg kg⁻¹ wet soil). Next, we examined the inhibition of CbE activity by chlorpyrifos and its metabolite chlorpyrifos-oxon in vitro to explore the aggregate size-dependent affinity of the pesticides for the active site of the enzyme. Lastly, we assessed the capability of CbEs to alleviate chlorpyrifos toxicity upon soil microorganisms. Our principal findings were: 1) CbE activity was significantly inhibited (30-67% of controls) in the microaggregates (1.0 mm) compared with the corresponding controls (i.e., pesticide-free aggregates), 2) chlorpyrifos-oxon was a more potent CbE inhibitor than chlorpyrifos; however, no significant differences in the CbE inhibition were found between micro- and macroaggregates, and 3) dose-response relationships between CbE activity and chlorpyrifos concentrations revealed the capability of the enzyme to bind chlorpyrifos-oxon, which was dependent on the time of exposure. This chemical interaction resulted in a safeguarding mechanism against chlorpyrifos-oxon toxicity on soil microbial activity, as evidenced by the unchanged activity of dehydrogenase and related extracellular enzymes in the pesticide-treated aggregates.
Taken together, these results suggest that environmental risk assessments of OP-polluted soils should consider the fractionation of soil in aggregates of different size to measure

  9. Scale effects between body size and limb design in quadrupedal mammals.

    Science.gov (United States)

    Kilbourne, Brandon M; Hoffman, Louwrens C

    2013-01-01

    Recently the metabolic cost of swinging the limbs has been found to be much greater than previously thought, raising the possibility that limb rotational inertia influences the energetics of locomotion. Larger mammals have a lower mass-specific cost of transport than smaller mammals. The scaling of the mass-specific cost of transport is partly explained by decreasing stride frequency with increasing body size; however, it is unknown if limb rotational inertia also influences the mass-specific cost of transport. Limb length and inertial properties--limb mass, center of mass (COM) position, moment of inertia, radius of gyration, and natural frequency--were measured in 44 species of terrestrial mammals, spanning eight taxonomic orders. Limb length increases disproportionately with body mass via positive allometry (length ∝ body mass^0.40); the positive allometry of limb length may help explain the scaling of the metabolic cost of transport. When scaled against body mass, forelimb inertial properties, apart from mass, scale with positive allometry. Fore- and hindlimb mass scale according to geometric similarity (limb mass ∝ body mass^1.0), as do the remaining hindlimb inertial properties. The positive allometry of limb length is largely the result of absolute differences in limb inertial properties between mammalian subgroups. Though likely detrimental to locomotor costs in large mammals, scale effects in limb inertial properties appear to be concomitant with scale effects in sensorimotor control and locomotor ability in terrestrial mammals. Across mammals, the forelimb's potential for angular acceleration scales according to geometric similarity, whereas the hindlimb's potential for angular acceleration scales with positive allometry.
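
The power-law scalings quoted above (e.g. length ∝ body mass^0.40) are conventionally estimated by ordinary least squares on log-transformed data. A minimal sketch of that procedure, using synthetic masses and lengths rather than the study's measurements:

```python
import math

# Allometric exponents such as length ∝ body mass^0.40 are typically
# estimated by OLS on log-transformed data: log L = log a + b * log M.
# The data below are synthetic, generated from an exact power law.

def fit_allometric_exponent(masses, lengths):
    """Return (a, b) for length = a * mass**b via log-log OLS."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(l) for l in lengths]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

masses = [0.02, 0.5, 5.0, 70.0, 500.0, 3000.0]   # body mass, kg (synthetic)
lengths = [1.3 * m ** 0.40 for m in masses]       # exact power law, b = 0.40
a, b = fit_allometric_exponent(masses, lengths)
print(round(b, 2))  # → 0.4
```

With noisy real measurements the fitted exponent would of course deviate from the generating value; the log-log regression itself is the standard tool.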

  10. Heat-Assisted Machining for Material Removal Improvement

    Science.gov (United States)

    Mohd Hadzley, A. B.; Hafiz, S. Muhammad; Azahar, W.; Izamshah, R.; Mohd Shahir, K.; Abu, A.

    2015-09-01

    Heat-assisted machining (HAM) is a process in which an intense heat source is used to locally soften the workpiece material before it is machined by a high-speed cutting tool. In this paper, an HAM machine is developed by modifying a small CNC machine with the addition of a special jig to hold the heat source in front of the machine spindle. A preliminary experiment to evaluate the capability of the HAM machine to produce groove formation for a slotting process was conducted. A block of AISI D2 tool steel measuring 100 mm (width) × 100 mm (length) × 20 mm (height) was cut by plasma heating with different settings of arc current, feed rate and air pressure. Their effect was analyzed based on the distance of cut (DOC). Experimental results demonstrated that the most significant factor contributing to the DOC is arc current, followed by feed rate and air pressure. HAM improves the slotting of AISI D2 by increasing the distance of cut, owing to the initial cutting groove formed by thermal melting and the pressurized air from the heat source.

  11. Computerized Machine for Cutting Space Shuttle Thermal Tiles

    Science.gov (United States)

    Ramirez, Luis E.; Reuter, Lisa A.

    2009-01-01

    A report presents the concept of a machine aboard the space shuttle that would cut oversized thermal-tile blanks to precise sizes and shapes needed to replace tiles that were damaged or lost during ascent to orbit. The machine would include a computer-controlled jigsaw enclosed in a clear acrylic shell that would prevent escape of cutting debris. A vacuum motor would collect the debris into a reservoir and would hold a tile blank securely in place. A database stored in the computer would contain the unique shape and dimensions of every tile. Once a broken or missing tile was identified, its identification number would be entered into the computer, wherein the cutting pattern associated with that number would be retrieved from the database. A tile blank would be locked into a crib in the machine, the shell would be closed (proximity sensors would prevent activation of the machine while the shell was open), and a "cut" command would be sent from the computer. A blade would be moved around the crib like a plotter, cutting the tile to the required size and shape. Once the tile was cut, an astronaut would take a space walk for installation.

  12. Size-selective sorting in bubble streaming flows: Particle migration on fast time scales

    Science.gov (United States)

    Thameem, Raqeeb; Rallabandi, Bhargav; Hilgenfeldt, Sascha

    2015-11-01

    Steady streaming from ultrasonically driven microbubbles is an increasingly popular technique in microfluidics because such devices are easily manufactured and generate powerful and highly controllable flows. Combining streaming and Poiseuille transport flows allows for passive size-sensitive sorting at particle sizes and selectivities much smaller than the bubble radius. The crucial particle deflection and separation takes place over very small times (milliseconds) and length scales (20-30 microns) and can be rationalized using a simplified geometric mechanism. A quantitative theoretical description is achieved through the application of recent results on three-dimensional streaming flow field contributions. To develop a more fundamental understanding of the particle dynamics, we use high-speed photography of trajectories in polydisperse particle suspensions, recording the particle motion on the time scale of the bubble oscillation. Our data reveal the dependence of particle displacement on driving phase, particle size, oscillatory flow speed, and streaming speed. With this information, the effective repulsive force exerted by the bubble on the particle can be quantified, showing for the first time how fast, selective particle migration is effected in a streaming flow. We acknowledge support by the National Science Foundation under grant number CBET-1236141.

  13. Axial flux permanent magnet brushless machines

    CERN Document Server

    Gieras, Jacek F; Kamper, Maarten J

    2008-01-01

    Axial Flux Permanent Magnet (AFPM) brushless machines are modern electrical machines with many advantages over their conventional counterparts. They are increasingly used in consumer electronics, public life, instrumentation and automation systems, clinical engineering, industrial electromechanical drives, the automobile manufacturing industry, electric and hybrid electric vehicles, marine vessels and toys. They are also used in more-electric aircraft and in many other larger-scale applications. New applications have also emerged in distributed generation systems (wind turbine generators)

  14. Findings of the 2011 workshop on statistical machine translation

    NARCIS (Netherlands)

    Callison-Burch, C.; Koehn, P.; Monz, C.; Zaidan, O.F.

    2011-01-01

    This paper presents the results of the WMT11 shared tasks, which included a translation task, a system combination task, and a task for machine translation evaluation metrics. We conducted a large-scale manual evaluation of 148 machine translation systems and 41 system combination entries. We used

  15. Multichannel noninvasive human-machine interface via stretchable µm thick sEMG patches for robot manipulation

    Science.gov (United States)

    Zhou, Ying; Wang, Youhua; Liu, Runfeng; Xiao, Lin; Zhang, Qin; Huang, YongAn

    2018-01-01

    Epidermal electronics (e-skin) emerging in recent years offer the opportunity to noninvasively and wearably extract biosignals from human bodies. The conventional processes for e-skin, based on standard microelectronic fabrication and a variety of transfer printing methods, nevertheless unquestionably constrain the size of the devices, posing a serious challenge to collecting signals via skin, the largest organ in the human body. Herein we propose a multichannel noninvasive human-machine interface (HMI) using stretchable surface electromyography (sEMG) patches to realize a robot hand mimicking human gestures. Time-efficient processes are first developed to manufacture µm-thick large-scale stretchable devices. With micron thickness, the stretchable µm thick sEMG patches show excellent conformability with human skin and consequently electrical performance comparable to that of conventional gel electrodes. Combined with their large-scale size, the multichannel noninvasive HMI via stretchable µm thick sEMG patches successfully manipulates the robot hand with eight different gestures, with precision as high as that of a conventional gel-electrode array.

  16. Size-selective pulmonary dose indices for metal-working fluid aerosols in machining and grinding operations in the automobile manufacturing industry.

    Science.gov (United States)

    Woskie, S R; Smith, T J; Hallock, M F; Hammond, S K; Rosenthal, F; Eisen, E A; Kriebel, D; Greaves, I A

    1994-01-01

    The current metal-working fluid exposures at three locations that manufacture automotive parts were assessed in conjunction with epidemiological studies of the mortality and respiratory morbidity experiences of workers at these plants. A rationale is presented for selecting and characterizing epidemiologic exposure groups in this environment. More than 475 full-shift personal aerosol samples were taken using a two-stage personal cascade impactor with median size cut-offs of 9.8 microns and 3.5 microns, plus a backup filter. For a sample of 403 workers exposed to aerosols of machining or grinding fluids, the mean total exposure was 706 micrograms/m3 (standard error (SE) = 21 micrograms/m3). Among 72 assemblers unexposed to machining fluids, the mean total exposure was 187 +/- 10 (SE) micrograms/m3. An analysis of variance model identified factors significantly associated with exposure level and permitted estimates of exposure for workers in the unsampled machine type/metal-working fluid groups. Comparison of the results obtained from personal impactor samples with predictions from an aerosol-deposition model for the human respiratory tract showed high correlation. However, the amount collected on the impactor stage underestimates extrathoracic deposition and overestimates tracheobronchial and alveolar deposition, as calculated by the deposition model. When both the impactor concentration and the deposition-model concentration were used to estimate cumulative thoracic concentrations for the worklives of a subset of auto workers, there was no significant difference in the rank order of the subjects' cumulative concentration. However, the cumulative impactor concentration values were significantly higher than the cumulative deposition-model concentration values for the subjects.

  17. Machine Learning Classification of Buildings for Map Generalization

    Directory of Open Access Journals (Sweden)

    Jaeeun Lee

    2017-10-01

    A critical problem in mapping data is the frequent updating of large data sets. To solve this problem, the updating of small-scale data based on large-scale data is very effective. Various map generalization techniques, such as simplification, displacement, typification, elimination, and aggregation, must therefore be applied. In this study, we focused on the elimination and aggregation of the building layer, for which each building in a large scale was classified as “0-eliminated,” “1-retained,” or “2-aggregated.” Machine-learning classification algorithms were then used for classifying the buildings. The data of 1:1000 scale and 1:25,000 scale digital maps obtained from the National Geographic Information Institute were used. We applied to these data various machine-learning classification algorithms, including naive Bayes (NB), decision tree (DT), k-nearest neighbor (k-NN), and support vector machine (SVM). The overall accuracies of each algorithm were satisfactory: DT, 88.96%; k-NN, 88.27%; SVM, 87.57%; and NB, 79.50%. Although elimination is a direct part of the proposed process, generalization operations, such as simplification and aggregation of polygons, must still be performed for buildings classified as retained and aggregated. Thus, these algorithms can be used for building classification and can serve as preparatory steps for building generalization.
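
As a rough illustration of the three-way labeling task described above, the sketch below applies one of the listed algorithms (k-NN) to the building labels. The features (building area, distance to nearest neighbor) and all training examples are invented for illustration; the study used attributes derived from the real 1:1000-scale maps:

```python
import math
from collections import Counter

# Minimal k-NN sketch for the three building classes
# "0-eliminated", "1-retained", "2-aggregated". Synthetic data only.

def knn_classify(train, query, k=3):
    """train: list of ((features...), label). Euclidean k-NN majority vote."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# (area_m2, distance_to_neighbor_m) -> label, all values made up
train = [
    ((20.0, 1.0), "0-eliminated"),
    ((25.0, 2.0), "0-eliminated"),
    ((400.0, 30.0), "1-retained"),
    ((350.0, 25.0), "1-retained"),
    ((120.0, 1.5), "2-aggregated"),
    ((110.0, 2.0), "2-aggregated"),
]
print(knn_classify(train, (380.0, 28.0)))  # → 1-retained
```

A real pipeline would standardize features and cross-validate k, and the competing NB/DT/SVM models would be trained on the same feature matrix for the accuracy comparison reported above.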

  18. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique to select a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
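
The core idea, a kernel expansion evaluated over a stochastically selected subset of support vectors, can be sketched as follows. The RBF kernel, the random coefficients, and the uniform sampling rule are illustrative assumptions; the paper's actual stochastic indexing technique and its integration with online training are more involved:

```python
import math
import random

# f(x) = sum_i alpha_i * k(x_i, x): the full kernel expansion versus an
# estimate computed over a random subset of the support vectors (SVs).
# All SVs and coefficients here are randomly generated toy values.

def rbf(u, v, gamma=0.5):
    return math.exp(-gamma * (u - v) ** 2)

random.seed(0)
svs = [(random.uniform(-3, 3), random.uniform(-1, 1))
       for _ in range(200)]  # (x_i, alpha_i) pairs

def f_full(x):
    return sum(a * rbf(xi, x) for xi, a in svs)

def f_subset(x, frac=0.25):
    """Evaluate the expansion over a uniformly sampled fraction of SVs,
    rescaled so the estimate is unbiased for the full sum."""
    subset = random.sample(svs, int(frac * len(svs)))
    return (1.0 / frac) * sum(a * rbf(xi, x) for xi, a in subset)

# Sanity check: with frac=1.0 the subset expansion is the full expansion.
print(abs(f_full(1.0) - f_subset(1.0, frac=1.0)) < 1e-9)  # → True
```

Evaluating only a fraction of the SVs is what yields the linear (rather than quadratic) scaling in the number of retained samples that the abstract emphasizes.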

  19. A new scaling for divertor detachment

    Science.gov (United States)

    Goldston, R. J.; Reinke, M. L.; Schwartz, J. A.

    2017-05-01

    The ITER design, and future reactor designs, depend on divertor ‘detachment,’ whether partial, pronounced or complete, to limit heat flux to plasma-facing components and to limit surface erosion due to sputtering. It would be valuable to have a measure of the difficulty of achieving detachment as a function of machine parameters, such as input power, magnetic field, major radius, etc. Frequently the parallel heat flux, estimated typically as proportional to P_sep/R or P_sep·B/R, is used as a proxy for this difficulty. Here we argue that impurity cooling is dependent on the upstream density, which itself must be limited by a Greenwald-like scaling. Taking this into account self-consistently, we find the impurity fraction required for detachment scales dominantly as power divided by poloidal magnetic field. The absence of any explicit scaling with machine size is concerning, as P_sep surely must increase greatly for an economic fusion system, while increases in the poloidal field strength are limited by coil technology and plasma physics. This result should be challenged by comparison with 2D divertor codes and with measurements on existing experiments. Nonetheless, it suggests that higher magnetic field, stronger shaping, double-null operation, ‘advanced’ divertor configurations, as well as alternate means to handle heat flux such as metallic liquid and/or vapor targets merit greater attention.
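
The contrast drawn above between the conventional proxy and the proposed scaling can be made concrete with a toy comparison. The machine parameters below are invented round numbers, not values from the paper or from any device design:

```python
# Two hypothetical machines: the conventional proxy P_sep*B/R carries an
# explicit size (R) dependence, while the detachment-difficulty measure
# argued for above, dominantly P_sep/B_pol, has no explicit machine size.

machines = {
    "medium tokamak": {"P_sep": 20.0, "B": 2.5, "B_pol": 0.5, "R": 1.7},   # MW, T, T, m
    "reactor-scale":  {"P_sep": 100.0, "B": 5.3, "B_pol": 1.0, "R": 6.2},
}

def proxies(m):
    """Return (P_sep*B/R, P_sep/B_pol) for one parameter set."""
    return m["P_sep"] * m["B"] / m["R"], m["P_sep"] / m["B_pol"]

for name, m in machines.items():
    conventional, power_over_bpol = proxies(m)
    print(f"{name}: P_sep*B/R = {conventional:.1f}, P_sep/B_pol = {power_over_bpol:.1f}")
```

Because R cancels out of the second measure, the larger machine gains no relief from its size: only raising B_pol (limited by coils and plasma physics) or lowering P_sep helps.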

  20. Chem-Prep PZT 95/5 for Neutron Generator Applications: Particle Size Distribution Comparison of Development and Production-Scale Powders

    International Nuclear Information System (INIS)

    SIPOLA, DIANA L.; VOIGT, JAMES A.; LOCKWOOD, STEVEN J.; RODMAN-GONZALES, EMILY D.

    2002-01-01

    The Materials Chemistry Department 1846 has developed a lab-scale chem-prep process for the synthesis of PNZT 95/5, a ferroelectric material that is used in neutron generator power supplies. This process (Sandia Process, or SP) has been successfully transferred to and scaled by Department 14192 (Ceramics and Glass Department) (Transferred Sandia Process, or TSP) to meet the future supply needs of Sandia for its neutron generator production responsibilities. In going from the development-size SP batch (1.6 kg/batch) to the production-scale TSP powder batch size (10 kg/batch), it was important to determine whether the scaling process caused any "performance-critical" changes in the PNZT 95/5 being produced. One area where a difference was found was in the particle size distributions of the calcined PNZT powders. Documented in this SAND report are the results of an experimental study to determine the origin of the differences in the particle size distribution of the SP and TSP powders

  1. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales

    Directory of Open Access Journals (Sweden)

    Jihoon Oh

    2017-09-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.

  2. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales.

    Science.gov (United States)

    Oh, Jihoon; Yun, Kyongsik; Hwang, Ji-Hyun; Chae, Jeong-Ho

    2017-01-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders ( N  = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.
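
The AUROC figures reported above have a simple operational meaning: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one. A minimal pairwise computation on toy labels and scores (not the study's data):

```python
# Rank-based (Mann-Whitney) computation of the area under the ROC curve:
# count the fraction of positive/negative pairs ranked correctly,
# with ties counted as half.

def auroc(labels, scores):
    pairs = 0
    wins = 0.0
    for li, si in zip(labels, scores):
        if li != 1:
            continue
        for lj, sj in zip(labels, scores):
            if lj != 0:
                continue
            pairs += 1
            if si > sj:
                wins += 1.0
            elif si == sj:
                wins += 0.5
    return wins / pairs

labels = [1, 1, 1, 0, 0, 0, 0]                    # toy ground truth
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]      # toy classifier scores
print(auroc(labels, scores))  # → 0.9166666666666666 (= 11/12)
```

The O(n²) pairwise loop is fine at this scale; production code would use a sort-based O(n log n) formulation, but the value is the same.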

  3. Fractal and multifractal approaches for the analysis of crack-size dependent scaling laws in fatigue

    Energy Technology Data Exchange (ETDEWEB)

    Paggi, Marco [Politecnico di Torino, Department of Structural Engineering and Geotechnics, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)], E-mail: marco.paggi@polito.it; Carpinteri, Alberto [Politecnico di Torino, Department of Structural Engineering and Geotechnics, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)

    2009-05-15

    The enhanced ability to detect and measure very short cracks, along with a great interest in applying fracture mechanics formulae to smaller and smaller crack sizes, has pointed out the so-called anomalous behavior of short cracks with respect to their longer counterparts. The crack-size dependencies of both the fatigue threshold and the Paris' constant C are only two notable examples of these anomalous scaling laws. In this framework, a unified theoretical model seems to be missing and the behavior of short cracks can still be considered an open problem. In this paper, we propose a critical reexamination of the fractal models for the analysis of crack-size effects in fatigue. The limitations of each model are highlighted and removed. A new generalized theory based on fractal geometry is then proposed, which permits consistent interpretation of the short-crack-related anomalous scaling laws within a unified theoretical formulation. Finally, this approach is used to interpret relevant experimental data related to the crack-size dependence of the fatigue threshold in metals.

  4. Fractal and multifractal approaches for the analysis of crack-size dependent scaling laws in fatigue

    International Nuclear Information System (INIS)

    Paggi, Marco; Carpinteri, Alberto

    2009-01-01

    The enhanced ability to detect and measure very short cracks, along with a great interest in applying fracture mechanics formulae to smaller and smaller crack sizes, has pointed out the so-called anomalous behavior of short cracks with respect to their longer counterparts. The crack-size dependencies of both the fatigue threshold and the Paris' constant C are only two notable examples of these anomalous scaling laws. In this framework, a unified theoretical model seems to be missing and the behavior of short cracks can still be considered an open problem. In this paper, we propose a critical reexamination of the fractal models for the analysis of crack-size effects in fatigue. The limitations of each model are highlighted and removed. A new generalized theory based on fractal geometry is then proposed, which permits consistent interpretation of the short-crack-related anomalous scaling laws within a unified theoretical formulation. Finally, this approach is used to interpret relevant experimental data related to the crack-size dependence of the fatigue threshold in metals.

  5. Towards large-scale FAME-based bacterial species identification using machine learning techniques.

    Science.gov (United States)

    Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul

    2009-05-01

    In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we have selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas. Only those profiles resulting from standard growth conditions have been retained. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models have resulted in good species identification results for the three genera. For Bacillus, Paenibacillus and Pseudomonas, random forests have resulted in sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forests models outperform those of the other machine learning techniques. Moreover, our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational species
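
The figures quoted for random forests above are per-genus sensitivity values, i.e. per-class recall. A minimal computation from true versus predicted labels; the species names and predictions below are invented stand-ins, not records from the study's data set:

```python
from collections import defaultdict

# Per-class sensitivity (recall): of the samples truly in class c,
# what fraction did the classifier predict as c?

def sensitivities(y_true, y_pred):
    tp = defaultdict(int)      # true positives per class
    total = defaultdict(int)   # actual count per class
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            tp[t] += 1
    return {c: tp[c] / total[c] for c in total}

# Toy labels standing in for species-level predictions
y_true = ["B. cereus", "B. cereus", "B. subtilis", "B. subtilis", "B. subtilis"]
y_pred = ["B. cereus", "B. subtilis", "B. subtilis", "B. subtilis", "B. cereus"]
print(sensitivities(y_true, y_pred))
```

The study's reported values (e.g. 0.847 for Bacillus) would be such per-class recalls averaged over the species within each genus.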

  6. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today’s markets. The authors address traditional machining topics, such as: single and multiple point cutting processes; grinding; components accuracy and metrology; shear stress in cutting; cutting temperature and analysis; chatter. They also address non-traditional machining, such as: electrical discharge machining; electrochemical machining; laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  7. Electrochemical machining of internal built-up surfaces of large-sized vessels for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Ryabchenko, N N; Pulin, V Ya [Vsesoyuznyj Proektno-Tekhnologicheskij Inst. Atomnogo Mashinostroeniya i Kotlostroeniya, Rostov-na-Donu (USSR)

    1977-01-01

    Electrochemical machining (ECM) has been employed for finishing mechanically processed inner surfaces of large lateral parts of vessel bodies with a welded 0Kh18N10T steel overlayer. The finishing technology developed reduces the surface roughness from 10 µm to the standard 2.5 µm at a machining efficiency of 2-4 m² per hour.

  8. Insulin/IGF-regulated size scaling of neuroendocrine cells expressing the bHLH transcription factor Dimmed in Drosophila.

    Directory of Open Access Journals (Sweden)

    Jiangnan Luo

    Neurons and other cells display a large variation in size in an organism. Thus, a fundamental question is how growth of individual cells and their organelles is regulated. Is size scaling of individual neurons regulated post-mitotically, independent of growth of the entire CNS? Although the role of insulin/IGF-signaling (IIS) in growth of tissues and whole organisms is well established, it is not known whether it regulates the size of individual neurons. We therefore studied the role of IIS in the size scaling of neurons in the Drosophila CNS. By targeted genetic manipulations of insulin receptor (dInR) expression in a variety of neuron types we demonstrate that the cell size is affected only in neuroendocrine cells specified by the bHLH transcription factor DIMMED (DIMM). Several populations of DIMM-positive neurons tested displayed enlarged cell bodies after overexpression of the dInR, as well as of PI3 kinase and Akt1 (protein kinase B), whereas DIMM-negative neurons did not respond to dInR manipulations. Knockdown of these components produces the opposite phenotype. Increased growth can also be induced by targeted overexpression of nutrient-dependent TOR (target of rapamycin) signaling components, such as Rheb (small GTPase), TOR and S6K (S6 kinase). After Dimm knockdown in neuroendocrine cells, manipulations of dInR expression have significantly smaller effects on cell size. We also show that dInR expression in neuroendocrine cells can be altered by up- or down-regulation of Dimm. This novel dInR-regulated size scaling is seen during postembryonic development, continues in the aging adult and is diet dependent. The increase in cell size includes cell body, axon terminations, nucleus and Golgi apparatus. We suggest that the dInR-mediated scaling of neuroendocrine cells is part of a plasticity that adapts the secretory capacity to changing physiological conditions and nutrient-dependent organismal growth.

  9. Experimental research of kinetic and dynamic characteristics of temperature movements of machines

    Science.gov (United States)

    Parfenov, I. V.; Polyakov, A. N.

    2018-03-01

    Nowadays, the urgency of informational support of machines at different stages of their life cycle is increasing, in the form of various experimental characteristics that determine the criteria for working capacity. The effectiveness of forming a base of experimental characteristics of machines is related directly to the duration of their field tests. In this research, the authors consider a new technique that reduces the duration of full-scale testing of machines by 30%. To this end, three new indicator coefficients were calculated in real time to determine the moments corresponding to the characteristic points. New terms for the thermal characteristics of machine tools are introduced: the kinetic and dynamic characteristics of the temperature movements of the machine. These allow taking into account not only the experimental values of the temperature displacements of the elements of the carrier system of the machine, but also their derivatives up to the third order, inclusive. The work is based on experimental data obtained in the course of full-scale thermal tests of a drilling-milling and boring CNC machine.

  10. Learning Machines Implemented on Non-Deterministic Hardware

    OpenAIRE

    Gupta, Suyog; Sindhwani, Vikas; Gopalakrishnan, Kailash

    2014-01-01

    This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay -- for the most part -- oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design methodology advocated here hinges on the deployment of compute-intensive machine learning kernels onto compute platforms that trade-off deter...

  11. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    The gear vibration signal is nonlinear and non-stationary, so gear fault diagnosis under variable conditions has always been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on intrinsic time-scale decomposition (ITD), singular value decomposition (SVD) and the support vector machine (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of the gearbox into several proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, the support vector machine is applied to classify the fault type of the gear. According to the experimental results, the performance of ITD-SVD exceeds that of the time-frequency analysis methods EMD and WPT combined with SVD for feature extraction, and the SVM classifier outperforms K-nearest neighbors (K-NN) and back propagation (BP). Moreover, the proposed approach can accurately diagnose and identify different fault types of gear under variable conditions.
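
    The SVD feature-extraction step can be sketched with a toy example: embed each vibration signal in a Hankel (trajectory) matrix, use its singular values as the feature vector, and classify. This is a minimal numpy-only illustration with synthetic signals; the paper's ITD decomposition is replaced by a plain Hankel embedding and its SVM by a nearest-centroid classifier.

```python
import numpy as np

def svd_features(signal, rows=8):
    # Embed the 1-D signal in a Hankel (trajectory) matrix and return its
    # singular values, used as a condition-robust feature vector.
    n = len(signal) - rows + 1
    hankel = np.stack([signal[i:i + n] for i in range(rows)])
    return np.linalg.svd(hankel, compute_uv=False)

# Synthetic "healthy" (pure tone) and "faulty" (amplitude-modulated) signals.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
healthy = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(256)
           for _ in range(10)]
faulty = [np.sin(2 * np.pi * 50 * t) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
          + 0.1 * rng.standard_normal(256) for _ in range(10)]

X = np.array([svd_features(s) for s in healthy + faulty])
y = np.array([0] * 10 + [1] * 10)

# Nearest-centroid stand-in for the paper's SVM classifier.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None, :] - centroids, axis=2), axis=1)
accuracy = (pred == y).mean()
```

    Amplitude modulation spreads the signal energy into sidebands, so the singular-value profile of a faulty signal differs markedly from a healthy one, which is what makes this feature robust.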

  12. Numerical Simulations of Two-Phase Flow in a Self-Aerated Flotation Machine and Kinetics Modeling

    KAUST Repository

    Fayed, Hassan E.; Ragab, Saad

    2015-01-01

    A new boundary condition treatment has been devised for two-phase flow numerical simulations in a self-aerated minerals flotation machine and applied to a Wemco 0.8 m3 pilot cell. Airflow rate is not specified a priori but is predicted by the simulations as well as power consumption. Time-dependent simulations of two-phase flow in flotation machines are essential to understanding flow behavior and physics in self-aerated machines such as the Wemco machines. In this paper, simulations have been conducted for three different uniform bubble sizes (db = 0.5, 0.7 and 1.0 mm) to study the effects of bubble size on air holdup and hydrodynamics in Wemco pilot cells. Moreover, a computational fluid dynamics (CFD)-based flotation model has been developed to predict the pulp recovery rate of minerals from a flotation cell for different bubble sizes, different particle sizes and particle size distribution. The model uses a first-order rate equation, where models for probabilities of collision, adhesion and stabilization and collisions frequency estimated by Zaitchik-2010 model are used for the calculation of rate constant. Spatial distributions of dissipation rate and air volume fraction (also called void fraction) determined by the two-phase simulations are the input for the flotation kinetics model. The average pulp recovery rate has been calculated locally for different uniform bubble and particle diameters. The CFD-based flotation kinetics model is also used to predict pulp recovery rate in the presence of particle size distribution. Particle number density pdf and the data generated for single particle size are used to compute the recovery rate for a specific mean particle diameter. Our computational model gives a figure of merit for the recovery rate of a flotation machine, and as such can be used to assess incremental design improvements as well as design of new machines.
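
    The first-order kinetics step of the model can be sketched numerically: given a rate constant k (which in the paper comes from the CFD-derived collision, adhesion and stabilization probabilities), recovery follows R(t) = R_max(1 - e^(-kt)), and a size-averaged recovery is a pdf-weighted sum. The rate constants and weights below are invented purely for illustration.

```python
import numpy as np

def recovery(k, t, r_max=1.0):
    # First-order flotation kinetics: R(t) = R_max * (1 - exp(-k * t)).
    return r_max * (1.0 - np.exp(-k * t))

# Hypothetical rate constants (1/min) for three particle sizes, and a
# number-density pdf over those sizes (values invented for illustration).
k_by_size = np.array([0.8, 1.5, 0.6])
size_pdf = np.array([0.2, 0.5, 0.3])   # weights sum to 1

t_float = 4.0                          # flotation time, minutes
mean_recovery = float(np.dot(size_pdf, recovery(k_by_size, t_float)))
```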


  15. Economies of scale and firm size optimum in rural water supply

    Science.gov (United States)

    Sauer, Johannes

    2005-11-01

    This article focuses on modeling and analyzing the cost structure of water-supplying companies. A cross-sectional data set was collected for water firms in rural areas of former East and West Germany. The empirical data are analyzed by applying a symmetric generalized McFadden (SGM) functional form. This flexible functional form allows for testing the concavity required by microeconomic theory, as well as the global imposition of such curvature restrictions without any loss of flexibility. The original specification of the SGM cost function is modified to incorporate fixed factors of water production and supply, for example groundwater intake or the number of connections supplied. The estimated flexible, globally curvature-correct cost function is then used to derive scale elasticities as well as the optimal firm size. The results show that no water supplier in the sample produces at constant returns to scale. The optimal firm size was found to be on average about three times larger than the existing one. These findings deliver evidence for the hypothesis that the legally set supply areas, oriented to public administrative criteria as well as local characteristics of water resources, are economically inefficient. Hence, policy-induced structural inefficiency in the rural water sector is confirmed.
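
    The scale-elasticity logic can be illustrated with a stand-in cost function (not the SGM form estimated in the article): returns to scale equal average cost over marginal cost, and the optimal firm size is the output at which this ratio passes through one, i.e. where average cost is minimised. The coefficients below are invented for illustration.

```python
import numpy as np

def cost(y):
    # Illustrative cost function with U-shaped average cost (NOT the
    # article's estimated SGM form): C(y) = 10 + 2y + 0.01 y^2.
    return 10.0 + 2.0 * y + 0.01 * y ** 2

def scale_elasticity(y, h=1e-5):
    # Returns-to-scale measure AC/MC = C(y) / (y * C'(y));
    # > 1 means increasing returns, 1 marks minimum average cost.
    marginal = (cost(y + h) - cost(y - h)) / (2 * h)
    return cost(y) / (y * marginal)

# Optimal firm size: the output level that minimises average cost.
ys = np.linspace(1.0, 100.0, 9901)          # grid with step 0.01
optimal_y = float(ys[np.argmin(cost(ys) / ys)])
```

    For this toy technology the elasticity exceeds one at small outputs (increasing returns) and crosses one at the average-cost minimum, mirroring the article's finding that existing firms operate below the optimal size.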

  16. MACHINE LEARNING TECHNIQUES USED IN BIG DATA

    Directory of Open Access Journals (Sweden)

    STEFANIA LOREDANA NITA

    2016-07-01

    The classical tools used in data analysis are not enough to benefit from all the advantages of big data. The amount of information is too large for a complete investigation, and possible connections and relations between data could be missed, because it is difficult or even impossible to verify every assumption over the information. Machine learning is a great solution for finding concealed correlations or relationships between data, because it runs at scale and works very well with large data sets. The more data we have, the more useful the machine learning algorithm is, because it "learns" from the existing data and applies the discovered rules to new entries. In this paper, we present some machine learning algorithms and techniques used in big data.

  17. Effects of pole flux distribution in a homopolar linear synchronous machine

    Science.gov (United States)

    Balchin, M. J.; Eastham, J. F.; Coles, P. C.

    1994-05-01

    Linear forms of synchronous electrical machine are at present being considered as the propulsion means in high-speed, magnetically levitated (Maglev) ground transportation systems. A homopolar form of machine is considered in which the primary member, which carries both ac and dc windings, is supported on the vehicle. Test results and theoretical predictions are presented for a design of machine intended for driving a 100-passenger vehicle at a top speed of 400 km/h. The layout of the dc magnetic circuit is examined to locate the best position for the dc winding from the point of view of minimum core weight. Measurements of flux build-up under the machine at different operating speeds are given for two types of secondary pole: solid and laminated. The solid-pole results, which are confirmed theoretically, show that this form of construction is impractical for high-speed drives. Measured motoring characteristics are presented for a short length of machine which simulates conditions at the leading and trailing ends of the full-sized machine. Combining these results with those from a cylindrical version of the machine makes it possible to infer the performance of the full-sized traction machine: a power factor of 0.8 and an efficiency of 0.9 at 300 km/h, which is much better than the reported performance of a comparable linear induction motor (power factor 0.52, efficiency 0.82). It is therefore concluded that in any projected high-speed Maglev system, a linear synchronous machine should be the first choice as the propulsion means.

  18. Conceptual design of current lead for large scale high temperature superconducting rotating machine

    International Nuclear Information System (INIS)

    Le, T. D.; Kim, J. H.; Park, S. I.; Kim, H. M.

    2014-01-01

    High-temperature superconducting (HTS) rotating machines always require an electric current of from several hundred to several thousand amperes to be led from outside into the cold region of the field coil. Heat losses through the current leads therefore assume tremendous importance. Consequently, it is necessary to find an optimal design for the leads that achieves minimum heat loss during machine operation for a given electrical current. In this paper, a conduction-cooled current lead type for a 10 MW-class HTS rotating machine is chosen, and a conceptual design is discussed and carried out based on estimating the lowest heat loss for a conventional metal lead versus a partially HTS lead. In addition, the steady-state thermal characteristics of each are considered and illustrated.

  19. Remote Machining and Evaluation of Explosively Filled Munitions

    Data.gov (United States)

    Federal Laboratory Consortium — This facility is used for remote machining of explosively loaded ammunition. Munition sizes from small arms through 8-inch artillery can be accommodated. Sectioning,...

  20. TF.Learn: TensorFlow's High-level Module for Distributed Machine Learning

    OpenAIRE

    Tang, Yuan

    2016-01-01

    TF.Learn is a high-level Python module for distributed machine learning inside TensorFlow. It provides an easy-to-use Scikit-learn-style interface to simplify the process of creating, configuring, training, evaluating, and experimenting with a machine learning model. TF.Learn integrates a wide range of state-of-the-art machine learning algorithms built on top of TensorFlow's low-level APIs for small to large-scale supervised and unsupervised problems. This module focuses on bringing machine learning t...

  1. INCREASING RETURNS TO SCALE, DYNAMICS OF INDUSTRIAL STRUCTURE AND SIZE DISTRIBUTION OF FIRMS

    Institute of Scientific and Technical Information of China (English)

    Ying FAN; Menghui LI; Zengru DI

    2006-01-01

    A multi-agent model is presented to discuss market dynamics and the size distribution of firms. The model emphasizes the effects of increasing returns to scale and describes the birth and death of adaptive producers. The evolution of market structure and its behavior under technological shocks are investigated. Its dynamical results are in good agreement with some empirical "stylized facts" of industrial evolution. With the diversity of demand and adaptive growth strategies of firms, firm size in the generalized model obeys a power-law distribution. Three factors mainly determine the competitive dynamics and the skewed size distributions of firms: (1) a self-reinforcing mechanism; (2) adaptive firm growth strategies; (3) demand diversity, or widespread heterogeneity in the technological capabilities of firms.
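
    The claim that firm sizes follow a power law can be checked on data with a standard maximum-likelihood (Hill-type) tail estimator. The sketch below, applied to synthetic Pareto-distributed "firm sizes", is illustrative and not the paper's own fitting procedure.

```python
import numpy as np

def hill_alpha(sizes, x_min):
    # Maximum-likelihood (Hill-type) estimate of the exponent alpha for a
    # density p(x) ~ x^(-alpha) above x_min.
    tail = np.asarray(sizes, dtype=float)
    tail = tail[tail >= x_min]
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

# Synthetic "firm sizes" drawn from a Pareto law with alpha = 2.5
# via inverse-transform sampling (x_min = 1).
rng = np.random.default_rng(42)
alpha_true = 2.5
u = 1.0 - rng.random(20000)                 # uniform on (0, 1]
sizes = u ** (-1.0 / (alpha_true - 1.0))
alpha_hat = float(hill_alpha(sizes, x_min=1.0))
```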

  2. Economies of scale and trends in the size of southern forest industries

    Science.gov (United States)

    James E. Granskog

    1978-01-01

    In each of the major southern forest industries, the trend has been toward achieving economies of scale, that is, to build larger production units to reduce unit costs. Current minimum efficient plant size estimated by survivor analysis is 1,000 tons per day capacity for sulfate pulping, 100 million square feet (3/8- inch basis) annual capacity for softwood plywood,...

  3. Finite size scaling analysis on Nagel-Schreckenberg model for traffic flow

    Science.gov (United States)

    Balouchi, Ashkan; Browne, Dana

    2015-03-01

    The traffic flow problem, as a many-particle non-equilibrium system, has caught the interest of physicists for decades. Understanding traffic flow properties, and thus gaining the ability to control the transition from the free-flow phase to the jammed phase, will play a critical role in the future world of self-driving car technology. We have studied phase transitions in one-lane traffic flow through the mean velocity, distributions of car spacing, dynamic susceptibility and jam persistence (as candidates for an order parameter), using the Nagel-Schreckenberg model to simulate traffic flow. A length-dependent transition has been observed for a range of maximum velocities greater than a certain value. Finite size scaling analysis indicates power-law scaling of these quantities at the onset of the jammed phase.
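
    The Nagel-Schreckenberg update rules (accelerate, brake to the gap, random slowdown, move) fit in a few lines; a minimal sketch measuring the mean velocity in the two phases, with illustrative parameter values:

```python
import numpy as np

def nasch_mean_velocity(length=200, density=0.2, v_max=5, p_slow=0.3,
                        steps=500, warmup=100, seed=1):
    # Minimal Nagel-Schreckenberg cellular automaton on a circular road.
    rng = np.random.default_rng(seed)
    n_cars = int(length * density)
    pos = np.sort(rng.choice(length, size=n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    total, samples = 0.0, 0
    for step in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % length   # empty cells ahead
        vel = np.minimum(vel + 1, v_max)               # 1) accelerate
        vel = np.minimum(vel, gaps)                    # 2) brake for the car ahead
        vel = np.maximum(vel - (rng.random(n_cars) < p_slow), 0)  # 3) random slowdown
        pos = (pos + vel) % length                     # 4) move
        if step >= warmup:                             # average after transient
            total += vel.mean()
            samples += 1
    return total / samples

free_v = nasch_mean_velocity(density=0.05)  # free-flow phase
jam_v = nasch_mean_velocity(density=0.5)    # jammed phase
```

    At low density the mean velocity stays near v_max - p_slow, while above the critical density it collapses, which is the order-parameter-like behavior the finite size scaling analysis probes.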

  4. Experience in use of optical theodolite for machine construction

    Science.gov (United States)

    Shereshevskiy, L. M.

    1984-02-01

    An optical theodolite, an instrument of small size and weight featuring a high-precision horizontal dial, was successfully used in the production of forging and pressing equipment at the Voronezh plant. Such a TV-1 theodolite, together with a contact-type indicating device and a mechanism for centering the machined part, is included in a turret goniometer for angular alignment and control of cutting operations. Its micrometer has 1-arcsecond scale divisions, and the instrument is designed to give readings with a high degree of stability and reproducibility, with the standard deviation of a single measurement not exceeding 5 arcseconds. It is particularly useful in the production of parts with variable spacing and cross section of grooves or slots, including curvilinear ones. With a universal adapter plate on which guide prisms and an interchangeable gauge pin are mounted, this theodolite can also be used in the production of large bevel gears: the same instrument serves a wide range of gear sizes, diametral pitches, and tooth profiles. Using a maximum of standard components, this theodolite can be easily assembled at any manufacturing plant.

  5. A Review on Parametric Analysis of Magnetic Abrasive Machining Process

    Science.gov (United States)

    Khattri, Krishna; Choudhary, Gulshan; Bhuyan, B. K.; Selokar, Ashish

    2018-03-01

    The magnetic abrasive machining (MAM) process is a highly developed unconventional machining process. It is frequently used in manufacturing industries for nanometer-range surface finishing of workpieces with the help of magnetic abrasive particles (MAPs) and a magnetic force applied in the machining zone. It is precise and faster than conventional methods and able to produce defect-free finished components. This paper provides a comprehensive review of the recent advancement of the MAM process carried out by different researchers to date. The effects of different input parameters, such as rotational speed of the electromagnet, voltage, magnetic flux density, abrasive particle size and working gap, on material removal rate (MRR) and surface roughness (Ra) are discussed. On the basis of this review, it is observed that the rotational speed of the electromagnet, voltage and mesh size of the abrasive particles have a significant impact on the MAM process.

  6. Brittle fracture in structural steels: perspectives at different size-scales.

    Science.gov (United States)

    Knott, John

    2015-03-28

    This paper describes characteristics of transgranular cleavage fracture in structural steel, viewed at different size-scales. Initially, consideration is given to structures and the service duty to which they are exposed at the macroscale, highlighting failure by plastic collapse and failure by brittle fracture. This is followed by sections describing the use of fracture mechanics and materials testing in carrying out assessments of structural integrity. Attention then focuses on the microscale, explaining how values of the local fracture stress in notched bars or of fracture toughness in pre-cracked test-pieces are related to features of the microstructure: carbide thicknesses in wrought material; the sizes of oxide/silicate inclusions in weld metals. Effects of a microstructure that is 'heterogeneous' at the mesoscale are treated briefly, with respect to the extraction of test-pieces from thick sections and to extrapolations of data to low failure probabilities. The values of local fracture stress may be used to infer a local 'work-of-fracture' that is found experimentally to be a few times greater than that of two free surfaces. Reasons for this are discussed in the concluding section on nano-scale events. It is suggested that, ahead of a sharp crack, it is necessary to increase the compliance by a cooperative movement of atoms (involving extra work) to allow the crack-tip bond to displace sufficiently for the energy of attraction between the atoms to reduce to zero. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  7. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  8. Modelling open pit shovel-truck systems using the Machine Repair Model

    Energy Technology Data Exchange (ETDEWEB)

    Krause, A.; Musingwini, C. [CBH Resources Ltd., Sydney, NSW (Australia). Endeaver Mine

    2007-08-15

    Shovel-truck systems for loading and hauling material in open pit mines are now routinely analysed using simulation models or off-the-shelf simulation software packages, which can be very expensive for once-off or occasional use. The simulation models invariably produce different estimations of fleet sizes due to their differing estimations of cycle time. No single model or package can accurately estimate the required fleet size because the fleet operating parameters are characteristically random and dynamic. In order to improve confidence in sizing the fleet for a mining project, at least two estimation models should be used. This paper demonstrates that the Machine Repair Model can be modified and used as a model for estimating truck fleet size in an open pit shovel-truck system. The modified Machine Repair Model is first applied to a virtual open pit mine case study. The results compare favourably to output from other estimation models using the same input parameters for the virtual mine. The modified Machine Repair Model is further applied to an existing open pit coal operation, the Kwagga Section of Optimum Colliery as a case study. Again the results confirm those obtained from the virtual mine case study. It is concluded that the Machine Repair Model can be an affordable model compared to off-the-shelf generic software because it is easily modelled in Microsoft Excel, a software platform that most mines already use.
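
    The Machine Repair Model underlying this approach can be sketched as a finite-source M/M/1//N queue: the trucks are the "machines", the shovel is the "repairman", and the fleet size is the smallest N whose predicted loading throughput meets the production target. The cycle times and target below are assumed for illustration, not taken from the case studies.

```python
import math

def machine_repair(n_trucks, lam, mu):
    # Finite-source M/M/1//N queue: each truck finishes its haul cycle at
    # rate lam and rejoins the shovel queue; the shovel loads at rate mu.
    weights = [math.factorial(n_trucks) // math.factorial(n_trucks - n)
               * (lam / mu) ** n for n in range(n_trucks + 1)]
    p_idle = 1.0 / sum(weights)              # probability the shovel is idle
    shovel_busy = 1.0 - p_idle
    throughput = mu * shovel_busy            # truck loads per minute
    return shovel_busy, throughput

# Assumed cycle parameters: 20 min haul-dump-return (lam = 1/20), a 4 min
# loading time (mu = 1/4), and a target production of 0.21 loads/min.
target = 0.21
fleet = next(n for n in range(1, 30)
             if machine_repair(n, 1 / 20, 1 / 4)[1] >= target)
```

    Because the whole calculation is a closed-form recursion over fleet size, it is easy to reproduce in a spreadsheet, which is exactly the affordability argument the paper makes for Microsoft Excel.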

  9. Gaussian processes for machine learning.

    Science.gov (United States)

    Seeger, Matthias

    2004-04-01

    Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countable or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes at a fairly elementary level, with special emphasis on characteristics relevant to machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines, in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13,78,31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to draw precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.
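
    The core GP regression computation such tutorials cover, the posterior mean and variance obtained via a Cholesky factorisation of the kernel matrix, fits in a few lines of numpy; the kernel, data and hyperparameters here are illustrative.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    # Squared-exponential kernel k(x, x') = var * exp(-(x - x')^2 / (2 l^2)).
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Noisy observations of sin(x).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 8)
y = np.sin(X) + 0.05 * rng.standard_normal(8)

noise = 0.05 ** 2
K = rbf(X, X) + noise * np.eye(8)
L = np.linalg.cholesky(K)                      # the O(n^3) step
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

x_star = np.array([np.pi / 2])                 # query point
k_star = rbf(x_star, X)
mean = (k_star @ alpha).item()                 # posterior mean
v = np.linalg.solve(L, k_star.T)
var_post = (rbf(x_star, x_star) - v.T @ v).item()  # posterior variance
```

    The cubic cost of the Cholesky step is exactly the "heavy computational scaling" the abstract refers to, and the sparse approximations it cites replace K with a low-rank surrogate to avoid it.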

  10. The Impact Of Surface Shape Of Chip-Breaker On Machined Surface

    Science.gov (United States)

    Šajgalík, Michal; Czán, Andrej; Martinček, Juraj; Varga, Daniel; Hemžský, Pavel; Pitela, David

    2015-12-01

    The machined surface is one of the most used indicators of workpiece quality. It is influenced by several factors, such as the cutting parameters, cutting material, shape of the cutting tool or cutting insert, and microstructure of the machined material, known collectively as technological parameters. By improving these parameters, we can improve the machined surface. In machining, it is important to identify the characteristics of the main product of the process, the workpiece, but also of the byproduct: the chip. The size and shape of the chip affect the lifetime of cutting tools, and an inappropriate chip form can influence machine functionality and lifetime, too. This article deals with the elimination of the long chip created when machining a shaft in the automotive industry, and with the impact of the chip-breaker shape on the chip shape under various cutting conditions based on production requirements.

  11. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve on the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. For individual sites, we investigate the exceedance probability at which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large-scale).

  12. Machine rates for selected forest harvesting machines

    Science.gov (United States)

    R.W. Brinker; J. Kinard; Robert Rummer; B. Lanford

    2002-01-01

    Very little new literature has been published on the subject of machine rates and machine cost analysis since 1989 when the Alabama Agricultural Experiment Station Circular 296, Machine Rates for Selected Forest Harvesting Machines, was originally published. Many machines discussed in the original publication have undergone substantial changes in various aspects, not...
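
    A machine rate combines ownership, operating and labor costs into a single hourly figure. The sketch below follows the standard straight-line-depreciation convention for such calculations; every dollar figure is invented for illustration, not taken from the circular.

```python
# Machine-rate sketch: hourly cost = ownership + operating + labor.
# All input figures below are assumed for illustration.
purchase = 250_000.0          # purchase price, $
salvage = 0.2 * purchase      # resale value at end of life
life_hours = 10_000.0         # productive machine hours over its life
rate_iit = 0.18               # combined interest, insurance and tax rate
hours_per_year = 2_000.0
fuel_lube = 28.0              # $/h
repair_maint = 22.0           # $/h
labor = 35.0                  # $/h, including benefits

avg_investment = (purchase + salvage) / 2
ownership = ((purchase - salvage) / life_hours
             + rate_iit * avg_investment / hours_per_year)
machine_rate = ownership + fuel_lube + repair_maint + labor
```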

  13. COMPARISON OF STATISTICALLY CONTROLLED MACHINING SOLUTIONS OF TITANIUM ALLOYS USING USM

    Directory of Open Access Journals (Sweden)

    R. Singh

    2010-06-01

    The purpose of the present investigation is to compare statistically controlled machining solutions for titanium alloys using ultrasonic machining (USM). In this study, the previously developed Taguchi model for USM of titanium and its alloys has been investigated and compared. Relationships between the material removal rate, tool wear rate, surface roughness and other controllable machining parameters (power rating, tool type, slurry concentration, slurry type, slurry temperature and slurry size) have been deduced. The results of this study suggest that at the best settings of the controllable machining parameters for titanium alloys (based upon the Taguchi design), the machining solution with USM is statistically controlled, which is not observed for other settings of the input parameters.
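
    Taguchi-style comparisons like this rank parameter settings by signal-to-noise ratios. A minimal sketch of the two standard S/N forms relevant here, larger-the-better for material removal rate and smaller-the-better for roughness, using made-up replicate readings:

```python
import numpy as np

def sn_larger_better(readings):
    # Taguchi S/N ratio when larger responses are better (e.g. MRR).
    r = np.asarray(readings, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / r ** 2))

def sn_smaller_better(readings):
    # Taguchi S/N ratio when smaller responses are better (e.g. Ra, tool wear).
    r = np.asarray(readings, dtype=float)
    return -10.0 * np.log10(np.mean(r ** 2))

mrr_runs = [0.9, 1.1, 1.0]     # hypothetical replicate MRR readings
ra_runs = [0.42, 0.38, 0.40]   # hypothetical roughness readings, micrometres
sn_mrr = float(sn_larger_better(mrr_runs))
sn_ra = float(sn_smaller_better(ra_runs))
```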

  14. Advancing the large-scale CCS database for metabolomics and lipidomics at the machine-learning era.

    Science.gov (United States)

    Zhou, Zhiwei; Tu, Jia; Zhu, Zheng-Jiang

    2018-02-01

    Metabolomics and lipidomics aim to comprehensively measure the dynamic changes of all metabolites and lipids present in biological systems. The use of ion mobility-mass spectrometry (IM-MS) for metabolomics and lipidomics has facilitated the separation and identification of metabolites and lipids in complex biological samples. The collision cross-section (CCS) value derived from IM-MS is a valuable physicochemical property for the unambiguous identification of metabolites and lipids. However, CCS values obtained from experimental measurement and computational modeling are available only in limited numbers, which significantly restricts the application of IM-MS. In this review, we discuss the recently developed machine-learning based prediction approach, which can efficiently generate precise CCS databases on a large scale. We also highlight the applications of CCS databases in support of metabolomics and lipidomics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Parameter Scaling for Epidemic Size in a Spatial Epidemic Model with Mobile Individuals.

    Directory of Open Access Journals (Sweden)

    Chiyori T Urabe

    Full Text Available In recent years, serious infectious diseases tend to transcend national borders and spread widely on a global scale. The incidence and prevalence of epidemics are highly influenced not only by pathogen-dependent disease characteristics such as the force of infection, the latent period, and the infectious period, but also by human mobility and contact patterns. However, the effect of heterogeneous mobility of individuals on epidemic outcomes is not fully understood. Here, we aim to elucidate how the spatial mobility of individuals contributes to the final epidemic size in a spatial susceptible-exposed-infectious-recovered (SEIR) model with mobile individuals on a square lattice. After illustrating the interplay between the mobility parameters and the other parameters in spatial epidemic spreading, we propose an index, a function of the system parameters, which largely governs the final epidemic size. The main contribution of this study is to show that the proposed index is useful for estimating how parameter scaling affects the final epidemic size. To demonstrate the effectiveness of the proposed index, we show that there is a positive correlation between the proposed index computed with real data on human airline travel and the actual number of positive incident cases of influenza B worldwide, implying that the growing incidence of influenza B is attributable to increased human mobility.
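
    The spatial model above couples mobility with the standard SEIR compartmental parameters. As a rough, well-mixed point of comparison (this sketch omits the paper's lattice and mobility entirely, and the parameter values are illustrative, not the study's), the final epidemic size of a plain SEIR model can be obtained by forward-Euler integration:

```python
def seir_final_size(beta, sigma, gamma, n=10_000, i0=10, dt=0.01, steps=100_000):
    """Forward-Euler integration of a well-mixed SEIR model.

    beta: transmission rate; sigma: 1/latent period; gamma: 1/infectious period.
    Returns the final epidemic size R(infinity)/N.
    """
    s, e, i, r = float(n - i0), 0.0, float(i0), 0.0
    for _ in range(steps):
        new_e = beta * s * i / n * dt   # S -> E (new exposures)
        new_i = sigma * e * dt          # E -> I (become infectious)
        new_r = gamma * i * dt          # I -> R (recover)
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
        r += new_r
    return r / n

# Illustrative parameters with basic reproduction number R0 = beta/gamma = 2.0
final_size = seir_final_size(beta=0.4, sigma=0.5, gamma=0.2)
```

    Raising beta (or the mobility-driven contact rate it stands in for) raises the final size, which is the qualitative behavior the proposed index captures in the spatial setting.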

  16. Making and Operating Molecular Machines: A Multidisciplinary Challenge.

    Science.gov (United States)

    Baroncini, Massimo; Casimiro, Lorenzo; de Vet, Christiaan; Groppi, Jessica; Silvi, Serena; Credi, Alberto

    2018-02-01

    Movement is one of the central attributes of life, and a key feature in many technological processes. While artificial motion is typically provided by macroscopic engines powered by internal combustion or electrical energy, movement in living organisms is produced by machines and motors of molecular size that typically exploit the energy of chemical fuels at ambient temperature to generate forces and ultimately execute functions. The progress in several areas of chemistry, together with an improved understanding of biomolecular machines, has led to the development of a large variety of wholly synthetic molecular machines. These systems have the potential to bring about radical innovations in several areas of technology and medicine. In this Minireview, we discuss, with the help of a few examples, the multidisciplinary aspects of research on artificial molecular machines and highlight its translational character.

  17. Peter J Derrick and the Grand Scale 'Magnificent Mass Machine' mass spectrometer at Warwick.

    Science.gov (United States)

    Colburn, A W; Derrick, Peter J; Bowen, Richard D

    2017-12-01

    The value of the Grand Scale 'Magnificent Mass Machine' mass spectrometer in investigating the reactivity of ions in the gas phase is illustrated by a brief analysis of previously unpublished work on metastable ionised n-pentyl methyl ether, which loses predominantly methanol and an ethyl radical, with very minor contributions from elimination of ethane and water. Expulsion of an ethyl radical is interpreted in terms of isomerisation to ionised 3-pentyl methyl ether, via distonic ions and, possibly, an ion-neutral complex comprising ionised ethylcyclopropane and methanol. This explanation is consistent with the closely similar behaviour of the labelled analogues, C3H7CH2CD2OCH3+. and C3H7CD2CH2OCH3+., and is supported by the greater kinetic energy release associated with loss of ethane from ionised n-propyl methyl ether compared to that starting from directly generated ionised 3-pentyl methyl ether.

  18. Detecting Neolithic Burial Mounds from LiDAR-Derived Elevation Data Using a Multi-Scale Approach and Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Alexandre Guyot

    2018-02-01

    Full Text Available Airborne LiDAR technology is widely used in archaeology and over the past decade has emerged as an accurate tool to describe anthropogenic landforms. Archaeological features are traditionally emphasised on a LiDAR-derived Digital Terrain Model (DTM) using multiple Visualisation Techniques (VTs), occasionally aided by automated feature detection or classification techniques. Such an approach offers limited results when applied to heterogeneous structures (different sizes, morphologies), which is often the case for archaeological remains that have been altered throughout the ages. This study proposes to overcome these limitations by developing a multi-scale analysis of topographic position combined with supervised machine learning algorithms (Random Forest). Rather than highlighting individual topographic anomalies, the multi-scalar approach allows archaeological features to be examined not only as individual objects, but within their broader spatial context. This innovative and straightforward method provides two levels of results: a composite image of topographic surface structure and a probability map of the presence of archaeological structures. The method was developed to detect and characterise megalithic funeral structures in the region of Carnac, the Bay of Quiberon, and the Gulf of Morbihan (France), which is currently considered for inclusion on the UNESCO World Heritage List. As a result, known archaeological sites have successfully been geo-referenced with greater accuracy than before (even when located under dense vegetation), and a ground-check confirmed the identification of a previously unknown Neolithic burial mound in the commune of Carnac.

  19. Power Scaling of the Size Distribution of Economic Loss and Fatalities due to Hurricanes, Earthquakes, Tornadoes, and Floods in the USA

    Science.gov (United States)

    Tebbens, S. F.; Barton, C. C.; Scott, B. E.

    2016-12-01

    Traditionally, the size of natural disaster events such as hurricanes, earthquakes, tornadoes, and floods is measured in terms of wind speed (m/sec), energy released (ergs), or discharge (m3/sec) rather than by economic loss or fatalities. Economic loss and fatalities from natural disasters result from the intersection of the human infrastructure and population with the size of the natural event. This study investigates the size versus cumulative number distribution of individual natural disaster events for several disaster types in the United States. Economic losses are adjusted for inflation to 2014 USD. The cumulative number divided by the time span of the data for each disaster type is the basis for making probabilistic forecasts in terms of the number of events greater than a given size per year and, its inverse, return time. Such forecasts are of interest to insurers/re-insurers, meteorologists, seismologists, government planners, and response agencies. Plots of size versus cumulative number per year for economic loss and fatalities are well fit by power scaling functions of the form p(x) = Cx^(-β), where p(x) is the cumulative number of events with size greater than or equal to x, C is a constant (the activity level), x is the event size, and β is the scaling exponent. Economic loss and fatalities due to hurricanes, earthquakes, tornadoes, and floods are well fit by power functions over one to five orders of magnitude in size. Economic losses for hurricanes and tornadoes have greater scaling exponents, β = 1.1 and 0.9 respectively, whereas earthquakes and floods have smaller scaling exponents, β = 0.4 and 0.6 respectively. Fatalities for tornadoes and floods have greater scaling exponents, β = 1.5 and 1.7 respectively, whereas hurricanes and earthquakes have smaller scaling exponents, β = 0.4 and 0.7 respectively. The scaling exponents can be used to make probabilistic forecasts for time windows ranging from 1 to 1000 years.
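
    For a power law of this form, the scaling exponent can be estimated from an event catalog by least squares on the log-transformed rank-size (cumulative number) distribution. A minimal sketch with synthetic Pareto-distributed losses (the data and tolerances are illustrative, and maximum-likelihood estimators are generally preferred for real catalogs):

```python
import numpy as np

def fit_power_law_ccdf(sizes):
    """Fit p(x) = C * x**(-beta) to the empirical cumulative number distribution
    (number of events with size >= x) by least squares in log-log space."""
    x = np.sort(np.asarray(sizes, dtype=float))[::-1]  # event sizes, descending
    rank = np.arange(1, x.size + 1)                    # cumulative count >= x
    slope, intercept = np.polyfit(np.log10(x), np.log10(rank), 1)
    return 10.0 ** intercept, -slope                   # (C, beta)

rng = np.random.default_rng(42)
losses = rng.pareto(1.5, 20_000) + 1.0   # classical Pareto, true exponent 1.5
C, beta = fit_power_law_ccdf(losses)
```

    Dividing the fitted cumulative count by the catalog's time span turns C into an annual activity level, from which the expected number of events exceeding a given size per year (and its inverse, the return time) follows directly.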

  20. Technique for Increasing Accuracy of Positioning System of Machine Tools

    Directory of Open Access Journals (Sweden)

    Sh. Ji

    2014-01-01

    Full Text Available The aim of this research is to improve the accuracy of the positioning and processing system of machine tools by optimizing the pressure diagrams of their guides. Machining quality is directly related to accuracy, which characterizes the degree to which various machine errors affect the part. The accuracy of the positioning system is one of the most significant machining characteristics, allowing the accuracy of processed parts to be evaluated. The literature describes the working area of the machine layout as rather informative for characterizing the effect of the positioning system on the macro-geometry of the surfaces to be processed. To enhance the static accuracy of the studied machine, two groups of measures are possible in principle. One aims to decrease the cutting force component that produces the overturning moments on the slider. The other is related to changing the sizes of the guide facets, which may change their pressure profile. The study was based on mathematical modeling and optimization of the cutting zone coordinates, yielding a formula for the surface pressure on the guides. The selected optimization parameters are the cutting force vector and the dimensions of the slides and guides. The results show that optimizing the coordinates of the cutting zone is necessary to increase processing accuracy. The research established that to define the optimal coordinates of the cutting zone, the sizes of the slides and the values and coordinates of the applied forces must be changed, equalizing the pressure and improving the accuracy of the positioning system of machine tools. A force vector is applied at different points of the workspace, pressure diagrams accounting for the changes in the positioning system parameters are found, and pressure diagram equalization is achieved to provide the highest accuracy of machine tools.

  1. A triple-scale crystal plasticity modeling and simulation on size effect due to fine-graining

    International Nuclear Information System (INIS)

    Kurosawa, Eisuke; Aoyagi, Yoshiteru; Tadano, Yuichi; Shizawa, Kazuyuki

    2010-01-01

    In this paper, a triple-scale crystal plasticity model bridging three hierarchical material structures, i.e., dislocation structure, grain aggregate and practical macroscopic structure, is developed. Geometrically necessary (GN) dislocation density and GN incompatibility are employed so as to describe isolated dislocations and dislocation pairs in a grain, respectively. Then the homogenization method is introduced into the GN dislocation-crystal plasticity model for derivation of the governing equation of the macroscopic structure with mathematical and physical consistency. Using the present model, a triple-scale FE simulation bridging the above three hierarchical structures is carried out for f.c.c. polycrystals with different mean grain sizes. It is shown that the present model can qualitatively reproduce size effects in macroscopic specimens with ultrafine grains, i.e., the increase of initial yield stress, the decrease of hardening ratio after reaching tensile strength and the reduction of tensile ductility with decreasing grain size. Moreover, the relationship between macroscopic yielding of the specimen and microscopic grain yielding is discussed, and the mechanism of the poor tensile ductility due to fine-graining is clarified. (author)

  2. Population size estimation of men who have sex with men through the network scale-up method in Japan.

    Directory of Open Access Journals (Sweden)

    Satoshi Ezoe

    Full Text Available BACKGROUND: Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. METHODS AND FINDINGS: An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. CONCLUSIONS: The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes by combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
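
    The basic scale-up arithmetic can be sketched directly: a respondent's personal network size c is scaled from the acquaintances they report in groups of known size, and the hidden population is then scaled from reported hidden-group acquaintances. The numbers below are invented for illustration, not the survey's data, and this is the simplest form of the estimator (without the adjustments the study applies):

```python
def network_scale_up(ref_reports, ref_sizes, hidden_reports, total_population):
    """Basic network scale-up estimator.

    ref_reports: per-respondent lists of acquaintances counted in each
                 reference group of known size (given in ref_sizes).
    hidden_reports: acquaintances each respondent knows in the hidden group.
    Returns (mean personal network size c, estimated hidden population size).
    """
    n = len(ref_reports)
    # c = T * (total reported reference-group ties) / (n * total reference size)
    total_ties = sum(sum(r) for r in ref_reports)
    c = total_population * total_ties / (n * sum(ref_sizes))
    # Hidden size = T * (mean reported hidden-group ties) / c
    hidden = total_population * sum(hidden_reports) / (len(hidden_reports) * c)
    return c, hidden

# Invented numbers: 100 respondents, reference groups of 10,000 and 20,000
# people, total population 1,000,000.
c, hidden = network_scale_up([[3, 6]] * 100, [10_000, 20_000],
                             [1.5] * 100, 1_000_000)
```

    With these consistent inputs the estimator recovers a network size of 300 and a hidden population of 5,000; real analyses additionally correct for transmission and barrier effects, as the study does for MSM.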

  3. Manufacture of mirrors by NC machining of EEM

    International Nuclear Information System (INIS)

    Hongo, Toshio; Azuma, Yasuo; Kato, Haruo; Hoshino, Hideo

    1981-01-01

    In the X-ray optical system for the photon factory facility now under construction at the National Laboratory for High Energy Physics, total reflection mirrors occupy an important position. The mirror shapes include both plane and curved surfaces, and the sizes are various. Especially for hard X-rays, the required accuracy of shape and surface roughness is high. Mirrors were therefore machined by elastic emission machining (EEM), developed by Mori et al. of Osaka University, and the flatness and surface roughness were examined. The machined materials were Pyrex and copper, whose mirror finish is difficult. The results are reported. In this machining method, a liquid in which very fine powder is uniformly dispersed and suspended in water was used. By bringing a rotating urethane ball close to the work surface, a gap of about 1 μm was formed between them, utilizing the fluid-bearing-like flow arising there. The machining was carried out by colliding the suspended fine particles against a minute region of the work surface. In order to obtain an arbitrary curved surface, numerical control was applied according to the variable controlling the amount of machining. In the case of glasses, the amount of machining could be controlled to about 0.01 μm. As for polycrystalline copper, the machining was difficult, and suitable conditions must be sought hereafter. (Kako, I.)

  4. Economic lifetime of a drilling machine:a case study on mining industry

    OpenAIRE

    Hamodi, Hussan; Lundberg, Jan; Jonsson, Adam

    2013-01-01

    Underground mines use many different types of machinery during the drift mining processes of drilling, charging, blasting, loading, scaling and bolting. Drilling machines play a critical role in the mineral extraction process and thus are important economically. However, as the machines age, their efficiency and effectiveness decrease, negatively affecting productivity and profitability and increasing total cost. Hence, the economic replacement lifetime of the machine is a key performance indicator...

  5. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    Science.gov (United States)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.

  6. An incremental anomaly detection model for virtual machines.

    Directory of Open Access Journals (Sweden)

    Hancui Zhang

    Full Text Available The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
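
    IISOM's extensions (heuristic initialization, weighted distances, neighborhood-restricted search) are not reproduced here, but the baseline SOM they modify can be sketched as a prototype grid trained by neighborhood-weighted updates, with the quantization error serving as the anomaly score. Grid size, learning-rate and neighborhood schedules below are illustrative choices, not the paper's:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self-Organizing Map; returns the prototype vectors."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # Grid coordinates of each unit, used by the neighborhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    w = rng.normal(size=(rows * cols, data.shape[1]))
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in rng.permutation(data):
            decay = 1.0 - t / t_max                          # linear schedule
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))      # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distances
            h = np.exp(-d2 / (2.0 * (sigma0 * decay + 1e-3) ** 2))
            w += (lr0 * decay) * h[:, None] * (x - w)
            t += 1
    return w

def anomaly_score(w, x):
    """Quantization error: distance from x to its best-matching prototype."""
    return np.sqrt(((w - np.asarray(x, float)) ** 2).sum(axis=1).min())
```

    After training on normal behavior, observations far from every prototype score high and are flagged as anomalous; IISOM replaces the plain Euclidean distance with a weighted one and restricts the best-matching-unit search to a neighborhood to cut detection time.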

  7. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245

  8. Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation

    NARCIS (Netherlands)

    Callison-Burch, C.; Koehn, P.; Monz, C.; Peterson, K.; Przybocki, M.; Zaidan, O.F.

    2010-01-01

    This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of

  9. In-situ particle sizing at millimeter scale from electrochemical noise: simulation and experiments

    International Nuclear Information System (INIS)

    Yakdi, N.; Huet, F.; Ngo, K.

    2015-01-01

    Over the last few years, particle sizing techniques in multiphase flows based on optical technologies have emerged as standard tools, but their main disadvantage is their dependence on the visibility of the measurement volume and on the focal distance. It is therefore important to promote alternative particle sizing techniques, especially ones able to work in hostile environments. This paper presents a single-particle sizing technique at the millimeter scale based on measuring the variation of the electrolyte resistance (ER) due to the passage of an insulating sphere between two electrodes immersed in a conductive solution. A theoretical model was proposed to determine the influence of the electrode size, the interelectrode distance, and the size and position of the sphere on the electrolyte resistance. Experimental variations of ER due to the passage of spheres, measured using a home-made electronic device, are also presented in this paper. The excellent agreement obtained between the theoretical and experimental results allows validation of both the model and the experimental measurements. In addition, the technique was shown to be able to perform accurate measurements of the velocity of a ball falling in a liquid.

  10. Nasonia Parasitic Wasps Escape from Haller's Rule by Diphasic, Partially Isometric Brain-Body Size Scaling and Selective Neuropil Adaptations

    NARCIS (Netherlands)

    Groothuis, Jitte; Smid, Hans M.

    2017-01-01

    Haller's rule states that brains scale allometrically with body size in all animals, meaning that relative brain size increases with decreasing body size. This rule applies to both interspecific and intraspecific comparisons. Only 1 species, the extremely small parasitic wasp Trichogramma evanescens, is

  11. Effect of microstructure and cutting speed on machining behavior of Ti6Al4V alloy

    Energy Technology Data Exchange (ETDEWEB)

    Telrandhe, Sagar V.; Mishra, Sushil; Saxena, Ashish K. [Indian Institute of Technology Bombay, Mumbai (India)

    2017-05-15

    Machining of aerospace and biomedical grade titanium alloys has always been a challenge because of their low conductivity and elastic modulus. Different machining methods and parameters have been adopted for high precision machining of titanium alloys. Machining of titanium alloys can be improved by microstructure optimization. The present study focuses on the effect of microstructure on machinability of Ti6Al4V alloys at different cutting speeds. Samples were subjected to different annealing conditions, resulting in different grain sizes and local micro-strains (misorientation). Cutting forces were significantly reduced after annealing; consequently, sub-surface residual stresses were reduced. Deformation twinning was also observed in samples annealed at a higher temperature due to larger grain size. Initially strain-free grains and deformation twinning during machining reduce the cutting force at higher cutting speeds.

  12. Potential Size of and Value Proposition for H2@Scale Concept

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, Mark F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jadun, Paige [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Pivovar, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Elgowainy, Amgad [Argonne National Laboratory

    2017-11-09

    The H2@Scale concept is focused on developing hydrogen as an energy carrier and using hydrogen's properties to improve the national energy system. Specifically, hydrogen has the ability to (1) supply a clean energy source for industry and transportation and (2) increase the profitability of variable renewable electricity generators such as wind turbines and solar photovoltaic (PV) farms by providing value for otherwise potentially curtailed electricity. Thus the concept also has the potential to reduce oil dependency by providing a low-carbon fuel for fuel cell electric vehicles (FCEVs), reduce emissions of carbon dioxide and pollutants such as NOx, and support domestic energy production, manufacturing, and U.S. economic competitiveness. The analysis reported here focuses on the potential market size and value proposition for the H2@Scale concept. It involves three analysis phases: 1. Initial phase estimating the technical potential for hydrogen markets and the resources required to meet them; 2. National-scale analysis of the economic potential for hydrogen and the interactions between willingness to pay by hydrogen users and the cost to produce hydrogen from various sources; and 3. In-depth analysis of spatial and economic issues impacting hydrogen production, utilization, and the markets. Preliminary analysis indicates that the technical potential for hydrogen use is approximately 60 million metric tons (MMT) annually for light duty FCEVs, heavy duty vehicles, ammonia production, oil refining, biofuel hydrotreating, metals refining, and injection into the natural gas system. The technical potentials of utility-scale PV and wind generation are each much greater than that necessary to produce 60 MMT/year of hydrogen. Uranium, natural gas, and coal reserves are each sufficient to produce 60 MMT/year of hydrogen in addition to their current uses for decades to centuries. National estimates of the economic potential of

  13. Machine learning: novel bioinformatics approaches for combating antimicrobial resistance.

    Science.gov (United States)

    Macesic, Nenad; Polubriaginof, Fernanda; Tatonetti, Nicholas P

    2017-12-01

    Antimicrobial resistance (AMR) is a threat to global health and new approaches to combating AMR are needed. Use of machine learning in addressing AMR is in its infancy but has made promising steps. We reviewed the current literature on the use of machine learning for studying bacterial AMR. The advent of large-scale data sets provided by next-generation sequencing and electronic health records makes applying machine learning to the study and treatment of AMR possible. To date, it has been used for antimicrobial susceptibility genotype/phenotype prediction, development of AMR clinical decision rules, novel antimicrobial agent discovery and antimicrobial therapy optimization. Application of machine learning to studying AMR is feasible but remains limited. Implementation of machine learning in clinical settings faces barriers to uptake, with concerns regarding model interpretability and data quality. Future applications of machine learning to AMR are likely to be laboratory-based, such as antimicrobial susceptibility phenotype prediction.

  14. Multi-response optimization of machining characteristics in ultrasonic machining of WC-Co composite through Taguchi method and grey-fuzzy logic

    Directory of Open Access Journals (Sweden)

    Ravi Pratap Singh

    2018-01-01

    Full Text Available This article addresses the application of grey-based fuzzy logic coupled with Taguchi's approach for optimization of multiple performance characteristics in ultrasonic machining of WC-Co composite material. Taguchi's L-36 array has been employed to conduct the experiments and to observe the influence of different process variables (power rating, cobalt content, tool geometry, thickness of work piece, tool material, abrasive grit size) on machining characteristics. The grey relational fuzzy grade has been computed by converting the multiple responses obtained from Taguchi's approach, i.e., material removal rate and tool wear rate, into a single performance characteristic using grey-based fuzzy logic. In addition, analysis of variance (ANOVA) has been performed with a view to identifying the significant parameters. Results revealed grit size and power rating as the leading parameters for optimization of the multiple performance characteristics. From the microstructure analysis, the mode of material deformation has been observed, and the critical parameters for the deformation mode (i.e., work material properties, grit size, and power rating) have been established.

  15. [A new machinability test machine and the machinability of composite resins for core built-up].

    Science.gov (United States)

    Iwasaki, N

    2001-06-01

    A new machinability test machine especially for dental materials was devised. The purpose of this study was to evaluate the effects of grinding conditions on the machinability of core built-up resins using this machine, and to confirm the relationship between machinability and other properties of composite resins. The experimental machinability test machine consisted of a dental air-turbine handpiece, a control weight unit, a driving unit for the stage fixing the test specimen, and so on. The machinability was evaluated as the change in volume after grinding using a diamond point. Five kinds of core built-up resins and human teeth were used in this study. The machinabilities of these composite resins increased with increasing load during grinding, and decreased with repeated grinding. There was no obvious correlation between machinability and Vickers hardness; however, a negative correlation was observed between machinability and scratch width.

  16. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for peta scale platforms and beyond

    International Nuclear Information System (INIS)

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-01-01

    Various strategies to efficiently implement quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices, a novel scheme based on the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the important enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)

  17. A methodology to investigate size scale effects in crystalline plasticity using uniaxial compression testing

    International Nuclear Information System (INIS)

    Uchic, Michael D.; Dimiduk, Dennis M.

    2005-01-01

    A methodology for performing uniaxial compression tests on samples having micron-size dimensions is presented. Sample fabrication is accomplished using focused ion beam milling to create cylindrical samples of uniform cross-section that remain attached to the bulk substrate at one end. Once fabricated, samples are tested in uniaxial compression using a nanoindentation device outfitted with a flat tip, and a stress-strain curve is obtained. The methodology can be used to examine the plastic response of samples of different sizes taken from the same bulk material. In this manner, dimensional size effects at the micron scale can be explored for single crystals, using a readily interpretable test that minimizes imposed stretch and bending gradients. The methodology was applied to a single-crystal Ni superalloy, and a transition from bulk-like to size-affected behavior was observed for samples 5 μm in diameter and smaller.

  18. Machine learning for large-scale wearable sensor data in Parkinson's disease: Concepts, promises, pitfalls, and futures.

    Science.gov (United States)

    Kubota, Ken J; Chen, Jason A; Little, Max A

    2016-09-01

    For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, "wearable," sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that "learn" from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses implications of this technology and a practical road map for realizing the full potential of this technology in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.

  19. Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns

    Directory of Open Access Journals (Sweden)

    Jinwei Gu

    2014-01-01

    Full Text Available This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if preemptions occur because of breakdowns. The breakdown process of the machine is independent of the jobs processed on the machine. The processing times required to complete the jobs are constant if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and follow a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal for minimizing the expected total completion time of all jobs.
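
    In the deterministic special case (no breakdowns), both rules reduce to simple sorts; a minimal sketch with illustrative job data, brute-forcing the optimum to confirm SPT:

```python
from itertools import permutations

def lpt_order(times):
    """Longest Processing Time first: jobs in non-increasing order."""
    return sorted(times, reverse=True)

def spt_order(times):
    """Shortest Processing Time first: jobs in non-decreasing order."""
    return sorted(times)

def total_completion_time(order):
    """Sum of job completion times on a single machine (no breakdowns)."""
    total = finish = 0
    for p in order:
        finish += p
        total += finish
    return total

jobs = [4, 2, 7, 1, 5]  # illustrative processing times
best = min(total_completion_time(p) for p in permutations(jobs))
assert total_completion_time(spt_order(jobs)) == best  # SPT is optimal
```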

  20. The laser micro-machining system for diamond anvil cell experiments and general precision machining applications at the High Pressure Collaborative Access Team.

    Science.gov (United States)

    Hrubiak, Rostislav; Sinogeikin, Stanislav; Rod, Eric; Shen, Guoyin

    2015-07-01

    We have designed and constructed a new system for micro-machining parts and sample assemblies used for diamond anvil cells and general user operations at the High Pressure Collaborative Access Team, sector 16 of the Advanced Photon Source. The new micro-machining system uses a pulsed laser of 400 ps pulse duration, ablating various materials without thermal melting, thus leaving a clean edge. With optics designed for a tight focus, the system can machine holes of any size larger than 3 μm in diameter. Unlike a standard electrical discharge machining drill, the new laser system allows micro-machining of non-conductive materials such as amorphous boron and silicon carbide gaskets, diamond, oxides, and other materials including organic materials such as polyimide films (i.e., Kapton). An important feature of the new system is the use of gas-tight or gas-flow environmental chambers, which allow the laser micro-machining to be done in a controlled (e.g., inert gas) atmosphere to prevent oxidation and other chemical reactions in air-sensitive materials. The gas-tight workpiece enclosure is also useful for machining materials with known health risks (e.g., beryllium). Specialized control software with a graphical interface enables micro-machining of custom 2D and 3D shapes. The laser-machining system was designed in a Class 1 laser enclosure, i.e., it includes laser safety interlocks and computer controls and allows for routine operation. Though initially designed mainly for machining of diamond anvil cell gaskets, the laser-machining system has since found many other micro-machining applications, several of which are presented here.

  2. DYNAMIC TENSILE TESTING WITH A LARGE SCALE 33 MJ ROTATING DISK IMPACT MACHINE

    OpenAIRE

    Kussmaul , K.; Zimmermann , C.; Issler , W.

    1985-01-01

    A recently completed testing machine for dynamic tensile tests is described. The machine consists essentially of a pendulum which holds the specimen and a large steel disk with a double striking nose fixed to its circumference. Disk diameter measures 2000 mm, while its mass is 6400 kg. The specimens to be tested are tensile specimens with a diameter of up to 20 mm and 300 mm length or CT 15 specimens at various temperatures. Loading velocity ranges from 1 to 150 m/s. The process of specimen-n...

  3. Size-dependent elastic/inelastic behavior of enamel over millimeter and nanometer length scales.

    Science.gov (United States)

    Ang, Siang Fung; Bortel, Emely L; Swain, Michael V; Klocke, Arndt; Schneider, Gerold A

    2010-03-01

    The microstructure of enamel, like that of most biological tissues, is hierarchical, and this hierarchy determines its mechanical behavior. However, current studies of the mechanical behavior of enamel lack a systematic investigation across these hierarchical length scales. In this study, we performed macroscopic uni-axial compression tests and spherical indentation with different indenter radii to probe enamel's elastic/inelastic transition over four hierarchical length scales, namely: 'bulk enamel' (mm), 'multiple-rod' (tens of μm), 'intra-rod' (hundreds of nm, with multiple crystallites) and finally 'single-crystallite' (tens of nm, with an area of approximately one hydroxyapatite crystallite). Enamel's elastic/inelastic transitions were observed at 0.4-17 GPa depending on the length scale and were compared with the values of synthetic hydroxyapatite crystallites. The elastic limit of a material is important, as it provides insight into the deformability of the material before fracture. At the smallest investigated length scale (contact radius approximately 20 nm), the elastic limit is followed by plastic deformation. At the largest investigated length scale (contact size approximately 2 mm), only an elastic then micro-crack-induced response was observed. A map of the elastic/inelastic regions of enamel from millimeter to nanometer length scales is presented. Possible underlying mechanisms are also discussed. (c) 2009 Elsevier Ltd. All rights reserved.

  4. A measurement strategy and an error-compensation model for the on-machine laser measurement of large-scale free-form surfaces

    International Nuclear Information System (INIS)

    Li, Bin; Li, Feng; Liu, Hongqi; Cai, Hui; Mao, Xinyong; Peng, Fangyu

    2014-01-01

    This study presents a novel measurement strategy and an error-compensation model for the measurement of large-scale free-form surfaces in on-machine laser measurement systems. To improve the measurement accuracy, the effects of the scan depth, surface roughness, incident angle and azimuth angle on the measurement results were investigated experimentally, and a practical measurement strategy considering the position and orientation of the sensor is presented. Also, a semi-quantitative model based on geometrical optics is proposed to compensate for the measurement error associated with the incident angle. The normal vector of the measurement point is determined using a cross-curve method from the acquired surface data. Then, the azimuth angle and incident angle are calculated to inform the measurement strategy and error-compensation model, respectively. The measurement strategy and error-compensation model are verified through the measurement of a large propeller blade on a heavy machine tool in a factory environment. The results demonstrate that the strategy and the model are effective in increasing the measurement accuracy. (paper)
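
    The incident angle entering the error-compensation model follows directly from the estimated surface normal and the beam direction; a small sketch (the vectors are illustrative, and the cross-curve normal estimation itself is omitted):

```python
import numpy as np

def incident_angle(normal, beam):
    """Angle (radians) between the laser beam axis and the local surface
    normal; this is the angle fed to the error-compensation model."""
    n = np.asarray(normal, dtype=float)
    b = np.asarray(beam, dtype=float)
    n = n / np.linalg.norm(n)
    b = b / np.linalg.norm(b)
    return np.arccos(abs(n @ b))  # abs(): beam may point into the surface
```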

  5. Pythagorean Means and Carnot Machines

    Indian Academy of Sciences (India)

    When Music Meets Heat. Ramandeep S Johal ... found their use in representing ratios on a musical scale (see Box. 1). There are ... two legs of a journey, spending equal time t in each of them, .... a pump between Ti and Tc, and require that Rhi = Pic. This ... We can make similar statements, when the machines in case are.

  6. A portable continuous blood purification machine for emergency rescue in disasters.

    Science.gov (United States)

    He, Ping; Zhou, Chunhua; Li, Hongyan; Yu, Yongwu; Dong, Zhen; Wen, Yuanyuan; Li, Ping; Tang, Wenhong; Wang, Xue

    2012-01-01

    Continuous renal replacement therapy plays an important role in emergency rescue. Currently, no continuous renal replacement therapy machine can be used under unstable conditions, as the fluid flow of these machines is controlled electronically. A novel machine that can provide emergency continuous renal replacement therapy in disaster rescue is therefore needed. A prototype portable continuous blood purifier based on a volumetric metering method was developed. Basic performance tests, special environmental tests, animal experiments and clinical use of the novel machine were completed to verify its performance under unstable conditions. All tests showed that the machine met the requirements of the national industry standards, with a size reduced to approximately one half that of the Baxter Aquarius machine. The clearance of harmful substances by the machine described here was equal to that of the Baxter Aquarius machine and was adequate for clinical purposes. The novel prototype performed well in all situations tested and can aid rescue work at disaster sites. Copyright © 2012 S. Karger AG, Basel.

  7. TEA CO2 laser machining of CFRP composite

    Science.gov (United States)

    Salama, A.; Li, L.; Mativenga, P.; Whitehead, D.

    2016-05-01

    Carbon fibre-reinforced polymer (CFRP) composites have found wide applications in the aerospace, marine, sports and automotive industries owing to their light weight and acceptable mechanical properties compared to the commonly used metallic materials. Machining of CFRP composites using lasers can be challenging due to inhomogeneity in the material properties and structures, which can lead to thermal damage during laser processing. In previous studies, Nd:YAG, diode-pumped solid-state, CO2 (continuous wave), disc and fibre lasers were used in cutting CFRP composites, and the control of damage such as the size of heat-affected zones (HAZs) remains a challenge. In this paper, a short-pulsed (8 μs) transversely excited atmospheric pressure CO2 laser was used, for the first time, to machine CFRP composites. The laser has high peak powers (up to 250 kW) and excellent absorption by both the carbon fibre and the epoxy binder. Design of experiments and statistical modelling, based on response surface methodology, were used to understand the interactions between process parameters such as laser fluence, repetition rate and cutting speed and their effects on the cut quality characteristics, including size of HAZ, machining depth and material removal rate (MRR). Based on this study, process parameter optimization was carried out to minimize the HAZ and maximize the MRR. A discussion is given on the potential applications and comparisons to other lasers in machining CFRP.

  8. Investigation of approximate models of experimental temperature characteristics of machines

    Science.gov (United States)

    Parfenov, I. V.; Polyakov, A. N.

    2018-05-01

    This work investigates various approaches to approximating experimental data and creating simulation mathematical models of thermal processes in machines, with the aim of shortening field tests and reducing the thermal error of machining. The main research methods used in this work are: full-scale thermal testing of machines; approximation of the experimental temperature characteristics of machine tools by polynomial models using several approaches; and analysis and evaluation of the modelling results (model quality) for the temperature characteristics of machines and their time derivatives up to third order. As a result of the research performed, rational methods, types, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
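
    A polynomial model of a measured heating curve of the kind described above might look as follows; the exponential warm-up form, time constant and temperatures are synthetic assumptions, not the authors' test data:

```python
import numpy as np

t = np.linspace(0.0, 240.0, 49)                  # test time, minutes
temp = 25.0 + 12.0 * (1.0 - np.exp(-t / 60.0))   # synthetic warm-up curve, deg C

coeffs = np.polyfit(t, temp, deg=3)              # cubic polynomial model
model = np.polyval(coeffs, t)                    # modelled characteristic
rate = np.polyval(np.polyder(coeffs), t)         # first time derivative

max_err = np.max(np.abs(model - temp))           # model-quality measure
```

    Comparing max_err across polynomial degrees is one simple way to trade model complexity against model quality, and np.polyder gives the time derivatives the study evaluates up to third order.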

  9. Fabrication of a micro-hole array on metal foil by nanosecond pulsed laser beam machining using a cover plate

    International Nuclear Information System (INIS)

    Ha, Kyoung Ho; Lee, Se Won; Jee, Won Young; Chu, Chong Nam; Kim, Janggil

    2015-01-01

    A novel laser beam machining (LBM) method is proposed to achieve higher precision and better quality beyond the limits of a commercial nanosecond pulsed laser system. The use of a cover plate is found to be effective for the precision machining of a thin metal foil at the micro scale. To verify the capability of cover plate laser beam machining (c-LBM) technology, a 30 by 30 array of micro-holes was fabricated on 8 µm-thick stainless steel 304 (STS) foil. As a result, thermal deformation and cracks were significantly reduced in comparison with the results using LBM without a cover plate. The standard deviations of the inscribed and circumscribed circles of holes with a diameter of 12 µm were reduced to 33% and 81%, respectively, and the average roundness improved by 77%. Moreover, the smallest diameter obtainable by c-LBM on the given equipment was found to be 6.9 µm, which was 60% less than the minimum hole size by LBM without a cover plate. (technical note)

  10. Marine snow microbial communities: scaling of abundances with aggregate size

    DEFF Research Database (Denmark)

    Kiørboe, Thomas

    2003-01-01

    Marine aggregates are inhabited by diverse microbial communities, and the concentration of attached microbes typically exceeds concentrations in the ambient water by orders of magnitude. An extension of the classical Lotka-Volterra model, which includes 3 trophic levels (bacteria, flagellates...... are controlled by flagellate grazing, while flagellate and ciliate populations are governed by colonization and detachment. The model also suggests that microbial populations are turned over rapidly (1 to 20 times d-1) due to continued colonization and detachment. The model overpredicts somewhat the scaling...... of microbial abundances with aggregate size observed in field-collected aggregates. This may be because it disregards the aggregation/disaggregation dynamics of aggregates, as well as interspecific interactions between bacteria....

  11. Modeling of thermal spalling during electrical discharge machining of titanium diboride

    International Nuclear Information System (INIS)

    Gadalla, A.M.; Bozkurt, B.; Faulk, N.M.

    1991-01-01

    Erosion in electrical discharge machining has been described as occurring by melting and flushing away of the liquid formed. Recently, however, thermal spalling was reported as the mechanism for machining refractory materials with low thermal conductivity and high thermal expansion. The process is described in this paper by a model based on a ceramic surface exposed to a constant circular heating source which supplies a constant flux over the pulse duration. The calculations were based on TiB2 mechanical properties along the a and c directions. Theoretical predictions were verified by machining hexagonal TiB2. Large flakes of TiB2 with sizes close to the grain size and maximum thickness close to the predicted values were collected, together with spherical particles of Cu and Zn eroded from the cutting wire. The cut surfaces consist of cleavage planes sometimes contaminated with Cu, Zn, and impurities from the dielectric fluid.
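
    As a rough illustration of a spalling criterion (not the paper's heat-source model), the thermal stress in a constrained heated surface layer is σ = EαΔT/(1 − ν); the TiB2 property values below are approximate literature numbers, assumed here for illustration only:

```python
# Approximate TiB2 properties (assumed, order-of-magnitude literature values)
E = 540e9         # Young's modulus, Pa
alpha = 6.4e-6    # thermal expansion coefficient, 1/K
nu = 0.11         # Poisson's ratio
strength = 400e6  # fracture strength, Pa

def thermal_stress(dT):
    """Biaxial thermal stress in a constrained surface layer heated by dT (K)."""
    return E * alpha * dT / (1.0 - nu)

# Temperature rise beyond which spalling (stress exceeding strength) is expected
dT_crit = strength * (1.0 - nu) / (E * alpha)
```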

  12. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, a Linux analysis software runs on a Macbook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  13. Expansion mechanisms for indigenously developed horizontal honing machines (Paper No. 06)

    International Nuclear Information System (INIS)

    Murthy, G.S.K.; Devarajan, N.

    1987-02-01

    Coolant channel components for nuclear reactors require scratch-free and smooth interior surfaces in addition to control of size. This calls for finish machining by the honing process. At the time these components were required, no manufacturer in India made honing machines, especially of the horizontal type. To meet this requirement, the Central Workshops of Bhabha Atomic Research Centre developed and manufactured two horizontal honing machines which can handle tubes up to three metres in length. One of the machines has been made to accommodate jobs up to six metres in length. The stone expansion mechanisms used in these machines are of the automatic hydraulic type combined with a mechanical expansion device. Details of these mechanisms are discussed in this paper. (author). 3 figs

  14. Virtual Machine Language 2.1

    Science.gov (United States)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

    VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. 
Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that
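
    The executable-state-machine idea can be sketched as follows; the Python API, state names and telemetry variables are hypothetical illustrations, not VML syntax:

```python
class StateMachine:
    """Minimal executable state machine: a transition fires when its
    condition over shared global variables evaluates true, mirroring
    VML 2.1's multi-variable waiting."""

    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions  # list of (src, dst, condition)

    def step(self, globals_):
        """Check conditions against current globals; take the first
        enabled transition out of the current state."""
        for src, dst, cond in self.transitions:
            if src == self.state and cond(globals_):
                self.state = dst
                break
        return self.state

# Hypothetical example: leave SAFE mode only when two variables agree
sm = StateMachine("SAFE", [
    ("SAFE", "NOMINAL", lambda g: g["power_ok"] and g["temp"] < 50),
])
```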

  15. Effects of Isometric Brain-Body Size Scaling on the Complexity of Monoaminergic Neurons in a Minute Parasitic Wasp

    NARCIS (Netherlands)

    Woude, van der Emma; Smid, Hans M.

    2017-01-01

    Trichogramma evanescens parasitic wasps show large phenotypic plasticity in brain and body size, resulting in a 5-fold difference in brain volume among genetically identical sister wasps. Brain volume scales linearly with body volume in these wasps. This isometric brain scaling forms an exception to

  16. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli 'Federico II', Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)

    2017-07-01

    The maximum size of a cosmic structure is given by its maximum turnaround radius: the scale at which the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for estimating the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value by a factor of 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.
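
    The quoted enhancement over ΛCDM is simple to evaluate numerically; the value of ω below is an illustrative large choice, not a quoted observational bound:

```python
def turnaround_enhancement(omega):
    """Ratio of the Brans-Dicke maximum turnaround radius to the
    Lambda-CDM value, 1 + 1/(3*omega), per the formula quoted above."""
    return 1.0 + 1.0 / (3.0 * omega)

# For omega >> 1 the deviation from Lambda-CDM is tiny, hence consistency
ratio = turnaround_enhancement(4.0e4)  # illustrative omega >> 1
```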

  17. Fault Diagnosis for Distribution Networks Using Enhanced Support Vector Machine Classifier with Classical Multidimensional Scaling

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-09-01

    Full Text Available In this paper, a new fault diagnosis technique based on the time domain reflectometry (TDR) method with a pseudo-random binary sequence (PRBS) stimulus and a support vector machine (SVM) classifier is investigated to recognize different types of fault in radial distribution feeders. The technique considers the amplitude of the reflected signals and the peaks of the cross-correlation (CCR) between the reflected and incident waves to generate the fault current dataset for the SVM. Furthermore, this multi-layer enhanced SVM classifier is combined with a classical multidimensional scaling (CMDS) feature extraction algorithm and kernel parameter optimization to increase training speed and improve overall classification accuracy. The proposed technique has been tested on a radial distribution feeder to identify ten different types of fault, considering 12 input features generated using Simulink software and the MATLAB Toolbox. The success rate of the SVM classifier is over 95%, which demonstrates the effectiveness and high accuracy of the proposed method.
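
    The classical MDS step is itself a small computation; a self-contained numpy sketch via double centering and eigendecomposition (the TDR feature generation and the SVM stage are omitted):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical multidimensional scaling: embed n points into k dimensions
    from an n x n matrix of pairwise Euclidean distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]             # keep the top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

    For Euclidean input distances the embedding reproduces the pairwise distances exactly (up to rotation and reflection) when k matches the intrinsic dimension, which is what makes CMDS attractive as a feature-extraction front end.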

  18. On the hydrodynamics and the scale-up of flotation processes

    International Nuclear Information System (INIS)

    Schubert, H.

    1986-01-01

    In flotation machines, turbulence is process-determining. Macroturbulence is necessary for suspension; microturbulence controls the air dispersion, the rate of particle-bubble collisions and the stresses on agglomerates. Consequently, the hydrodynamic optimization of flotation processes plays an important role in flotation efficiency. The paper considers the following aspects: the turbulent microprocesses of flotation; the integral hydrodynamic characterization of flotation processes; correlations between particle size and optimum hydrodynamics; correlations between flocculation of fine particles and optimum hydrodynamics; and hydrodynamic scale-up of flotation processes.

  19. Electric-Discharge Machining Techniques for Evaluating Tritium Effects on Materials

    International Nuclear Information System (INIS)

    Morgan, M.J.

    2003-01-01

    In this investigation, new ways to evaluate the long-term effects of tritium on the structural properties of components were developed. Electric-discharge machining (EDM) techniques for cutting tensile and fracture toughness samples from tritium-exposed regions of returned reservoirs were demonstrated. An existing electric discharge machine was used to cut sub-size tensile and fracture toughness samples from the inside surfaces of reservoir mockups. Tensile properties from the EDM tensile samples were similar to those measured using full-size samples cut from similar stock. Although the existing equipment could not be used for machining tritium-exposed hardware, off-the-shelf EDM units are available that could. With the right equipment and the required radiological controls in place, similar machining and testing techniques could be used to directly measure the effects of tritium on the properties of material cut from reservoir returns. Stress-strain properties from tritium-exposed reservoirs would improve finite element modeling of reservoir performance because the data would be representative of the true state of the reservoir material in the field. Tensile data from samples cut directly from reservoirs would also complement existing shelf storage and burst test data of the Life Storage Program and help answer questions about a specific reservoir's processing history and properties.

  20. Performance of a Horizontal Double Cylinder Type of Fresh Coffee Cherries Pulping Machine

    Directory of Open Access Journals (Sweden)

    Sukrisno Widyotomo

    2009-05-01

    Full Text Available Pulping is one important step in the wet coffee processing method. Usually, the pulping process uses a machine constructed of wood or metal. The horizontal single-cylinder coffee pulping machine is the most popular with coffee processors and in the market. One weakness of the horizontal single-cylinder machine is the high proportion of broken beans. Broken beans are a major defect that results in low quality. The Indonesian Coffee and Cocoa Research Institute has designed and tested a horizontal double-cylinder coffee pulping machine. The material tested was mature Robusta cherry at 60-65% (wet basis) moisture content; the size composition of the cherries was 50.8% larger than 15 mm diameter, 32% larger than 10 mm diameter, and 16.6% passing through a 10 mm hole; bulk density was 690-695 kg/m3, and the cherries were clean of metal and foreign materials. The results showed that this machine has an optimal capacity of 420 kg/h at a rotor speed of 1400 rpm for unsorted coffee cherries, yielding 53.08% whole parchment coffee, 16.92% broken beans, and 30% beans in the wet skin. For small coffee cherries, the optimal capacity is 603 kg/h at 1600 rpm, yielding 51.30% whole parchment coffee, 12.59% broken beans, and 36.1% beans in the wet skin. Finally, for medium coffee cherries, the optimal capacity is 564 kg/h at 1800 rpm, yielding 48.64% whole parchment coffee, 18.5% broken beans, and 32.86% beans in the wet skin. Key words: coffee, pulp, pulper, cylinder, quality.

  1. Finite-size scaling method for the Berezinskii–Kosterlitz–Thouless transition

    International Nuclear Information System (INIS)

    Hsieh, Yun-Da; Kao, Ying-Jer; Sandvik, Anders W

    2013-01-01

    We test an improved finite-size scaling method for reliably extracting the critical temperature T_BKT of a Berezinskii–Kosterlitz–Thouless (BKT) transition. Using known single-parameter logarithmic corrections to the spin stiffness ρ_s at T_BKT in combination with the Kosterlitz–Nelson relation between the transition temperature and the stiffness, ρ_s(T_BKT) = 2T_BKT/π, we define a size-dependent transition temperature T_BKT(L1, L2) based on a pair of system sizes L1, L2, e.g., L2 = 2L1. We use Monte Carlo data for the standard two-dimensional classical XY model to demonstrate that this quantity is well behaved and can be reliably extrapolated to the thermodynamic limit using the next expected logarithmic correction beyond the ones included in defining T_BKT(L1, L2). For the Monte Carlo calculations we use GPU (graphical processing unit) computing to obtain high-precision data for L up to 512. We find that the sub-leading logarithmic corrections have significant effects on the extrapolation. Our result T_BKT = 0.8935(1) is several error bars above the previously best estimates of the transition temperature, T_BKT ≈ 0.8929. If only the leading log-correction is used, the result is, however, consistent with the lower value, suggesting that previous works have underestimated T_BKT because of the neglect of sub-leading logarithms. Our method is easy to implement in practice and should be applicable to generic BKT transitions. (paper)
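
    The Kosterlitz–Nelson crossing criterion used above can be illustrated with a toy calculation. The sketch below assumes a schematic, non-physical stiffness curve with a 1/ln L finite-size term (the `stiffness` function is invented for illustration; only the crossing condition ρ_s(T) = 2T/π comes from the record) and locates the size-dependent crossing temperature by bisection.

    ```python
    import math

    def stiffness(T, L, rho0=1.0, T_inf=0.8935, c=0.3):
        # Toy finite-size spin stiffness: linear decrease toward T_inf plus a
        # 1/ln(L) finite-size correction (illustrative, not the XY model).
        return max(0.0, rho0 * (1.0 - T / T_inf)) + c / math.log(L)

    def t_cross(L, lo=0.0, hi=2.0, tol=1e-10):
        # Find T where rho_s(T; L) = 2 T / pi (Kosterlitz-Nelson criterion)
        # by bisection; f is decreasing in T for this toy model.
        f = lambda T: stiffness(T, L) - 2.0 * T / math.pi
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Crossing temperatures drift slowly with system size, which is why the
    # pairwise definition and logarithmic extrapolation are needed.
    for L in (32, 64, 128, 256):
        print(L, round(t_cross(L), 5))
    ```

    The slow drift of the crossing point with L mirrors the logarithmic finite-size corrections that the paper's pairwise extrapolation is designed to remove.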

  2. Understanding scaling laws

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1986-01-01

    This paper discusses accelerator scaling laws: how they can be generated and how they are used. A scaling law is a relation between machine parameters and beam parameters. An alternative point of view is that a scaling law is an imposed relation between the equations of motion and the initial conditions. The relation between the parameters is obtained by requiring the beam to be matched. (A beam is said to be matched if the phase-space distribution function is a function of single-particle invariants of the motion.) Because of this restriction, the number of independent parameters describing the system is reduced. Using simple models for bunched- and unbunched-beam situations, scaling laws are shown to determine the general behavior of beams in accelerators. Such knowledge is useful in design studies for new machines such as high-brightness linacs. The simple model presented shows much of the same behavior as a more detailed RFQ model.

  3. Bending of marble with intrinsic length scales: a gradient theory with surface energy and size effects

    International Nuclear Information System (INIS)

    Vardoulakis, I.; Kourkoulis, S.K.; Exadaktylos, G.

    1998-01-01

    A gradient bending theory is developed based on a strain energy function that includes the classical Bernoulli-Euler term, the shape correction term (microstructural length scale) introduced by Timoshenko, and a term associated with surface energy (micromaterial length scale) accounting for the bending moment gradient effect. It is shown that the last term is capable of interpreting the size effect in three-point bending (3PB), namely the decrease of the failure load with decreasing beam length at the same aspect ratio. This theory is used to describe the mechanical behaviour of Dionysos-Pentelikon marble in 3PB. A series of tests on prismatic marble beams of the same aperture but different lengths was conducted, and it was concluded that the present theory predicts the size effect well. (orig.)

  4. The Effect of Different Non-Metallic Inclusions on the Machinability of Steels.

    Science.gov (United States)

    Ånmark, Niclas; Karasev, Andrey; Jönsson, Pär Göran

    2015-02-16

    Considerable research has been conducted over recent decades on the role of non-metallic inclusions and their link to the machinability of different steels. The present work reviews the mechanisms of steel fractures during different mechanical machining operations and the behavior of various non-metallic inclusions in a cutting zone. More specifically, the effects of composition, size, number and morphology of inclusions on machinability factors (such as cutting tool wear, power consumption, etc.) are discussed and summarized. Finally, some methods for modification of non-metallic inclusions in the liquid steel are considered to obtain a desired balance between mechanical properties and machinability of various steel grades.

  5. Assisting the Tooling and Machining Industry to Become Energy Efficient

    Energy Technology Data Exchange (ETDEWEB)

    Curry, Bennett [Arizona Commerce Authority, Phoenix, AZ (United States)

    2016-12-30

    The Arizona Commerce Authority (ACA) conducted an Innovation in Advanced Manufacturing Grant Competition to support and grow southern and central Arizona’s Aerospace and Defense (A&D) industry and its supply chain. The problem statement for this grant challenge was that many A&D machining processes utilize older generation CNC machine tool technologies that can result in an inefficient use of resources – energy, time and materials – compared to the latest state-of-the-art CNC machines. Competitive awards funded projects to develop innovative new tools and technologies that reduce energy consumption for older generation machine tools and foster working relationships between industry small to medium-sized manufacturing enterprises and third-party solution providers. During the 42-month term of this grant, 12 competitive awards were made. Final reports have been included with this submission.

  6. Sleep Management on Multiple Machines for Energy and Flow Time

    DEFF Research Database (Denmark)

    Chan, Sze-Hang; Lam, Tak-Wah; Lee, Lap Kei

    2011-01-01

    In large data centers, determining the right number of operating machines is often non-trivial, especially when the workload is unpredictable. Using too many machines would waste energy, while using too few would affect the performance. This paper extends the traditional study of online flow-time scheduling on multiple machines to take sleep management and energy into consideration. Specifically, we study online algorithms that can determine dynamically when and which subset of machines should wake up (or sleep), and how jobs are dispatched and scheduled. We consider schedules whose objective is to minimize the sum of flow time and energy, and obtain O(1)-competitive algorithms for two settings: one assumes machines running at a fixed speed, and the other allows dynamic speed scaling to further optimize energy usage. Like the previous work on the tradeoff between flow time and energy, the analysis...

  7. the impact of machine geometries on the average torque of dual ...

    African Journals Online (AJOL)

    HOD

    Keywords: average torque, dual start, machine geometry, optimal value, PM machines. (Figures: torque (Nm) as a function of stator tooth width/stator slot pitch and of back-iron thickness (mm), for rotor pole numbers 4, 5, 7, 8, 10, 11, 13 and 14.)

  8. Reverse engineering of wörner type drilling machine structure.

    Science.gov (United States)

    Wibowo, A.; Belly, I.; llhamsyah, R.; Indrawanto; Yuwana, Y.

    2018-03-01

    A product design sometimes needs to be modified to suit the conditions of the production facilities and existing resource capabilities without reducing the functional aspects of the product itself. This paper describes the reverse engineering process applied to the main structure of a wörner type drilling machine, to obtain a machine structure design that can be produced by resources with limited capability using simple processes. Structural, functional and work-mechanism analyses were performed to understand the function and role of each basic component. The drilling machine was dismantled and each basic component measured to obtain geometry and size data for every part. Geometric models of the structural components and of the machine assembly were built to facilitate simulation and machine performance analysis with reference to the ISO standard for drilling machines. A tolerance stack-up analysis was also performed to determine the types and values of geometrical and dimensional tolerances, which affect how easily the components can be manufactured and assembled.

  9. Determining the Particle Size of Debris from a Tunnel Boring Machine Through Photographic Analysis and Comparison Between Excavation Performance and Rock Mass Properties

    Science.gov (United States)

    Rispoli, A.; Ferrero, A. M.; Cardu, M.; Farinetti, A.

    2017-10-01

    This paper presents the results of a study carried out on a 6.3-m-diameter exploratory tunnel excavated in hard rock by an open tunnel boring machine (TBM). The study provides a methodology, based on photographic analysis, for the evaluation of the particle size distribution of debris produced by the TBM. A number of tests were carried out on the debris collected during the TBM advancement. In order to produce a parameter indicative of the particle size of the debris, the coarseness index (CI) was defined and compared with some parameters representative of the TBM performance [i.e. the excavation specific energy (SE) and field penetration index (FPI)] and rock mass features, such as RMR, GSI, uniaxial compression strength and joint spacing. The results obtained showed a clear trend between the CI and some TBM performance parameters, such as SE and FPI. On the contrary, due to the rock mass fracturing, a clear relationship between the CI and rock mass characteristics was not found.
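
    A coarseness index of the kind compared above can be sketched from particle size data. The definition assumed here, summing the cumulative mass percentages retained on successive sieves so that coarser debris yields a larger CI, is one common convention; the paper derives its CI from photographic analysis, and the sieve data below are invented for illustration.

    ```python
    # Hypothetical sieve analysis of TBM debris: opening (mm) -> mass retained (g).
    sieve = {63: 120.0, 31.5: 340.0, 16: 280.0, 8: 160.0, 4: 100.0}

    def coarseness_index(retained):
        # Sum of cumulative mass percentages retained, from coarsest to finest
        # sieve. A sample concentrated in coarse fractions accumulates high
        # percentages early and therefore scores a higher CI.
        total = sum(retained.values())
        cum, ci = 0.0, 0.0
        for opening in sorted(retained, reverse=True):
            cum += 100.0 * retained[opening] / total
            ci += cum
        return ci

    print(round(coarseness_index(sieve), 1))  # -> 322.0
    ```

    With this convention the CI rises as chips get coarser, which is the direction of the trends the paper reports against specific energy and field penetration index.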

  10. Machine learning and data science in soft materials engineering

    Science.gov (United States)

    Ferguson, Andrew L.

    2018-01-01

    In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by ‘de-jargonizing’ data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.
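
    As a concrete taste of the first tool in that taxonomy, here is a minimal principal component analysis sketch in plain NumPy. The two-factor synthetic data generator is invented for illustration; SVD of the mean-centered matrix yields the components and explained-variance ratios.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic feature matrix: 200 samples, 5 features driven by 2 latent factors.
    latent = rng.normal(size=(200, 2))
    mixing = rng.normal(size=(2, 5))
    X = latent @ mixing + 0.05 * rng.normal(size=(200, 5))

    # PCA by singular value decomposition of the mean-centered data.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)  # fraction of variance per component

    print(explained.round(3))   # nearly all variance lies in the first two components
    scores = Xc @ Vt[:2].T      # 2-D embedding of the 200 samples
    ```

    The low-dimensional `scores` are the kind of collective coordinates used in soft matter to summarize high-dimensional configurations.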

  11. Machine learning and data science in soft materials engineering.

    Science.gov (United States)

    Ferguson, Andrew L

    2018-01-31

    In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by 'de-jargonizing' data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.

  12. Machine Learning Approaches in Cardiovascular Imaging.

    Science.gov (United States)

    Henglin, Mir; Stein, Gillian; Hushcha, Pavel V; Snoek, Jasper; Wiltschko, Alexander B; Cheng, Susan

    2017-10-01

    Cardiovascular imaging technologies continue to increase in their capacity to capture and store large quantities of data. Modern computational methods, developed in the field of machine learning, offer new approaches to leveraging the growing volume of imaging data available for analyses. Machine learning methods can now address data-related problems ranging from simple analytic queries of existing measurement data to the more complex challenges involved in analyzing raw images. To date, machine learning has been used in 2 broad and highly interconnected areas: automation of tasks that might otherwise be performed by a human and generation of clinically important new knowledge. Most cardiovascular imaging studies have focused on task-oriented problems, but more studies involving algorithms aimed at generating new clinical insights are emerging. Continued expansion in the size and dimensionality of cardiovascular imaging databases is driving strong interest in applying powerful deep learning methods, in particular, to analyze these data. Overall, the most effective approaches will require an investment in the resources needed to appropriately prepare such large data sets for analyses. Notwithstanding current technical and logistical challenges, machine learning and especially deep learning methods have much to offer and will substantially impact the future practice and science of cardiovascular imaging. © 2017 American Heart Association, Inc.

  13. Prediction of Machine Tool Condition Using Support Vector Machine

    International Nuclear Information System (INIS)

    Wang Peigong; Meng Qingfeng; Zhao Jian; Li Junjie; Wang Xiufeng

    2011-01-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Considering that condition data from CNC machine tools often comprise only small numbers of samples, a condition-prediction method based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The support vector machine prediction models are used to predict working-condition trends of a certain type of CNC worm wheel and gear grinding machine from vibration-signal sequence data collected during machining, and the relationship between different eigenvalues of the CNC vibration signal and machining quality is discussed. The test results show that the trend of the vibration signal peak-to-peak value in the surface normal direction is most relevant to the trend of the surface roughness value. In working-condition trend prediction, the support vector machine has higher prediction accuracy in both short-term (one-step) and long-term (multi-step) prediction compared to an autoregressive (AR) model and an RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.
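
    One-step prediction with a support vector regressor can be sketched as follows, here with scikit-learn's SVR. The synthetic vibration peak-to-peak sequence, lag length, and hyperparameters are invented for illustration; the paper's data and feature extraction differ.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    # Synthetic, noisy periodic vibration peak-to-peak signature.
    rng = np.random.default_rng(1)
    t = np.arange(200)
    series = 0.2 * np.sin(0.3 * t) + 0.02 * rng.normal(size=t.size)

    # One-step prediction: embed the series into (lagged window -> next value) pairs.
    lag = 8
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]

    # Fit on the first 150 windows, predict the remainder.
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:150], y[:150])
    pred = model.predict(X[150:])
    rmse = float(np.sqrt(np.mean((pred - y[150:]) ** 2)))
    print(round(rmse, 3))
    ```

    Multi-step prediction follows the same embedding but feeds each prediction back in as part of the next input window.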

  14. Positional reference system for ultraprecision machining

    International Nuclear Information System (INIS)

    Arnold, J.B.; Burleson, R.R.; Pardue, R.M.

    1982-01-01

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerically controlled multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine-strain-induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  15. Positional reference system for ultraprecision machining

    Science.gov (United States)

    Arnold, J.B.; Burleson, R.R.; Pardue, R.M.

    1980-09-12

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerically controlled multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine-strain-induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  16. Machine learning for Big Data analytics in plants.

    Science.gov (United States)

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Tensor Network Quantum Virtual Machine (TNQVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that leverages a distributed network of GPUs to simulate quantum circuits in a manner that leverages recent results from tensor network theory.

  18. Plans for miniature machining at LASL

    International Nuclear Information System (INIS)

    Rhorer, R.L.

    1979-01-01

    A special shop for making miniature or very small parts is being established within the LASL Shop Department, and one of the machine tools for this shop is a high precision lathe. The report describes a method based on scale modeling analysis which was used to define the specific requirements for this lathe

  19. AC Loss Analysis of MgB2-Based Fully Superconducting Machines

    Science.gov (United States)

    Feddersen, M.; Haran, K. S.; Berg, F.

    2017-12-01

    Superconducting electric machines have shown potential for a significant increase in power density, making them attractive for size- and weight-sensitive applications such as offshore wind generation, marine propulsion, and hybrid-electric aircraft propulsion. Superconductors exhibit no loss under dc conditions, but ac currents and fields produce considerable losses due to hysteresis, eddy currents, and coupling mechanisms. For this reason, many present machines are designed to be partially superconducting, meaning that the dc field components are superconducting while the ac armature coils are conventional conductors. Fully superconducting designs can provide increases in power density with significantly higher armature current; however, a good estimate of ac losses is required to determine feasibility under the machine's intended operating conditions. This paper aims to characterize the expected losses in a fully superconducting machine targeted at aircraft, based on an actively-shielded, partially superconducting machine from prior work. Various factors such as magnet strength, operating frequency, and machine load are examined to produce a model for the loss in the superconducting components of the machine. This model is then used to optimize the design of the machine for minimal ac loss while maximizing power density. Important observations from the study are discussed.

  20. The dynamic analysis of drum roll lathe for machining of rollers

    Science.gov (United States)

    Qiao, Zheng; Wu, Dongxu; Wang, Bo; Li, Guo; Wang, Huiming; Ding, Fei

    2014-08-01

    An ultra-precision machine tool for machining rollers has been designed and assembled. Because the dynamic characteristics of the machine tool have an obvious impact on the quality of the microstructures on the roller surface, this paper analyzes the dynamic characteristics of the existing machine, including the influence of a large-scale, slender roller being fixtured in the machine. First, a finite element model of the machine tool is built and simplified; based on this model, a finite element modal analysis yields the natural frequencies and mode shapes of the first four modes of the machine tool. According to the modal analysis results, the low-stiffness subsystems of the machine tool can be further improved and a reasonable bandwidth for the machine's control system can be designed. Finally, considering the shock imparted to the feeding system and cutting tool by frequent fast positioning of the Z axis, a transient analysis is conducted in ANSYS. Based on the transient analysis results, the vibration behavior of key components of the machine tool and its impact on the cutting process are explored.

  1. Variability of the raindrop size distribution at small spatial scales

    Science.gov (United States)

    Berne, A.; Jaffrain, J.

    2010-12-01

    Because of the interactions between atmospheric turbulence and cloud microphysics, the raindrop size distribution (DSD) is strongly variable in space and time. The spatial variability of the DSD at small spatial scales (below a few km) is not well documented and not well understood, mainly because of a lack of adequate measurements at the appropriate resolutions. A network of 16 disdrometers (Parsivels) has been designed and set up over EPFL campus in Lausanne, Switzerland. This network covers a typical operational weather radar pixel of 1x1 km2. The question of the significance of the variability of the DSD at such small scales is relevant for radar remote sensing of rainfall because the DSD is often assumed to be uniform within a radar sample volume and because the Z-R relationships used to convert the measured radar reflectivity Z into rain rate R are usually derived from point measurements. Thanks to the number of disdrometers, it was possible to quantify the spatial variability of the DSD at the radar pixel scale and to show that it can be significant. In this contribution, we show that the variability of the total drop concentration, of the median volume diameter and of the rain rate are significant, taking into account the sampling uncertainty associated with disdrometer measurements. The influence of this variability on the Z-R relationship can be non-negligible. Finally, the spatial structure of the DSD is quantified using a geostatistical tool, the variogram, and indicates high spatial correlation within a radar pixel.
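
    The sensitivity of the Z-R relationship to the DSD can be sketched numerically. The example below assumes a gamma DSD and a power-law fall speed (the Atlas-Ulbrich form); it integrates the sixth moment for Z and the flux-weighted third moment for R, then fits Z = aR^b over a family of exponential DSDs. All parameter values are illustrative, not the disdrometer network's data.

    ```python
    import numpy as np

    D = np.linspace(0.05, 10.0, 1000)   # drop diameter grid, mm

    def z_and_r(N0, mu, lam):
        # Gamma DSD: N(D) = N0 * D^mu * exp(-lam * D), in m^-3 mm^-1.
        N = N0 * D**mu * np.exp(-lam * D)
        dD = D[1] - D[0]
        Z = np.sum(N * D**6) * dD                     # reflectivity factor, mm^6 m^-3
        v = 3.78 * D**0.67                            # fall speed (m/s), power-law approx.
        R = 6e-4 * np.pi * np.sum(N * D**3 * v) * dD  # rain rate, mm/h
        return Z, R

    # Family of exponential DSDs (mu = 0, Marshall-Palmer-like N0), log-log fit of Z on R.
    ZR = np.array([z_and_r(8000.0, 0.0, lam) for lam in np.linspace(1.5, 4.0, 12)])
    b, loga = np.polyfit(np.log(ZR[:, 1]), np.log(ZR[:, 0]), 1)
    print(round(float(np.exp(loga))), round(float(b), 2))  # a and b in Z = a R^b
    ```

    Repeating the fit with a different N0 or shape parameter mu shifts a and b, which is exactly why sub-pixel DSD variability matters for radar rainfall estimation.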

  2. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environmentally Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environmentally Friendly Machining to be a useful volume.

  3. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik F. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center

    2017-05-31

    The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy as measured by existing and new metrics, which themselves were developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.

  4. Finite-size scaling theory and quantum hamiltonian Field theory: the transverse Ising model

    International Nuclear Information System (INIS)

    Hamer, C.J.; Barber, M.N.

    1979-01-01

    Exact results for the mass gap, specific heat and susceptibility of the one-dimensional transverse Ising model on a finite lattice are generated by constructing a finite matrix representation of the Hamiltonian using strong-coupling eigenstates. The critical behaviour of the limiting infinite chain is analysed using finite-size scaling theory. In this way, excellent estimates (to within 1/2% accuracy) are found for the critical coupling and the exponents α, ν and γ
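
    The finite-lattice construction above can be mimicked with brute-force exact diagonalization. This sketch uses dense matrices rather than the strong-coupling basis of the paper, so only very small chains are feasible; it shows the raw input to a finite-size scaling analysis: the mass gap E1 - E0 of the transverse-field Ising chain shrinking with system size at the critical coupling g = 1.

    ```python
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=float)
    sz = np.array([[1, 0], [0, -1]], dtype=float)
    I2 = np.eye(2)

    def op_at(op, site, n):
        # Tensor product placing `op` at `site` in an n-spin chain.
        m = np.array([[1.0]])
        for k in range(n):
            m = np.kron(m, op if k == site else I2)
        return m

    def tfim_gap(n, g):
        # H = -sum_i sz_i sz_{i+1} - g sum_i sx_i  (open boundary conditions).
        H = np.zeros((2**n, 2**n))
        for i in range(n - 1):
            H -= op_at(sz, i, n) @ op_at(sz, i + 1, n)
        for i in range(n):
            H -= g * op_at(sx, i, n)
        E = np.linalg.eigvalsh(H)
        return E[1] - E[0]

    # At the critical coupling g = 1 the gap closes as the chain grows.
    for n in (4, 6, 8):
        print(n, round(tfim_gap(n, 1.0), 4))
    ```

    Fitting how this gap vanishes with 1/n is the simplest version of the finite-size scaling analysis used to extract the critical coupling and exponents.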

  5. Scaling of laser-plasma interactions with laser wavelength and plasma size

    International Nuclear Information System (INIS)

    Max, C.E.; Campbell, E.M.; Mead, W.C.; Kruer, W.L.; Phillion, D.W.; Turner, R.E.; Lasinski, B.F.; Estabrook, K.G.

    1983-01-01

    Plasma size is an important parameter in wavelength-scaling experiments because it determines both the threshold and potential gain for a variety of laser-plasma instabilities. Most experiments to date have of necessity produced relatively small plasmas, due to laser energy and pulse-length limitations. We have discussed in detail three recent Livermore experiments which had large enough plasmas that some instability thresholds were exceeded or approached. Our evidence for Raman scatter, filamentation, and the two-plasmon decay instability needs to be confirmed in experiments which measure several instability signatures simultaneously, and which produce more quantitative information about the local density and temperature profiles than we have today

  6. Scaling of laser-plasma interactions with laser wavelength and plasma size

    Energy Technology Data Exchange (ETDEWEB)

    Max, C.E.; Campbell, E.M.; Mead, W.C.; Kruer, W.L.; Phillion, D.W.; Turner, R.E.; Lasinski, B.F.; Estabrook, K.G.

    1983-01-25

    Plasma size is an important parameter in wavelength-scaling experiments because it determines both the threshold and potential gain for a variety of laser-plasma instabilities. Most experiments to date have of necessity produced relatively small plasmas, due to laser energy and pulse-length limitations. We have discussed in detail three recent Livermore experiments which had large enough plasmas that some instability thresholds were exceeded or approached. Our evidence for Raman scatter, filamentation, and the two-plasmon decay instability needs to be confirmed in experiments which measure several instability signatures simultaneously, and which produce more quantitative information about the local density and temperature profiles than we have today.

  7. A multi-scale PDMS fabrication strategy to bridge the size mismatch between integrated circuits and microfluidics.

    Science.gov (United States)

    Muluneh, Melaku; Issadore, David

    2014-12-07

    In recent years there has been great progress harnessing the small-feature size and programmability of integrated circuits (ICs) for biological applications, by building microfluidics directly on top of ICs. However, a major hurdle to the further development of this technology is the inherent size-mismatch between ICs (~mm) and microfluidic chips (~cm). Increasing the area of the ICs to match the size of the microfluidic chip, as has often been done in previous studies, leads to a waste of valuable space on the IC and an increase in fabrication cost (>100×). To address this challenge, we have developed a three-dimensional PDMS chip that can straddle multiple length scales of hybrid IC/microfluidic chips. This approach allows millimeter-scale ICs, with no post-processing, to be integrated into a centimeter-sized PDMS chip. To fabricate this PDMS chip we use a combination of soft-lithography and laser micromachining. Soft lithography was used to define micrometer-scale fluid channels directly on the surface of the IC, allowing fluid to be controlled with high accuracy and brought into close proximity to sensors for highly sensitive measurements. Laser micromachining was used to create ~50 μm vias to connect these molded PDMS channels to a larger PDMS chip, which can connect multiple ICs and house fluid connections to the outside world. To demonstrate the utility of this approach, we built and demonstrated an in-flow magnetic cytometer that consisted of a 5 × 5 cm2 microfluidic chip that incorporated a commercial 565 × 1145 μm2 IC with a GMR sensing circuit. We additionally demonstrated the modularity of this approach by building a chip that incorporated two of these GMR chips connected in series.

  8. Small machine tools for small workpieces final report of the DFG priority program 1476

    CERN Document Server

    Sanders, Adam

    2017-01-01

    This contributed volume presents the research results of the program “Small machine tools for small work pieces” (SPP 1476), funded by the German Research Society (DFG). The book contains the final report of the priority program, presenting novel approaches for size-adapted, reconfigurable micro machine tools. The target audience primarily comprises research experts and practitioners in the field of micro machine tools, but the book may also be beneficial for graduate students.

  9. Applications of color machine vision in the agricultural and food industries

    Science.gov (United States)

    Zhang, Min; Ludas, Laszlo I.; Morgan, Mark T.; Krutz, Gary W.; Precetti, Cyrille J.

    1999-01-01

    Color is an important factor in the agricultural and food industries. Agricultural and prepared food products are often graded by producers and consumers using color parameters. Color is used to estimate maturity and sort produce for defects, and also to perform genetic screening or make aesthetic judgements. The task of sorting produce following a color scale is very complex and requires special illumination and training. Moreover, this task cannot be performed for long durations without fatigue and loss of accuracy. This paper describes a machine vision system designed to perform color classification in real-time. Applications for sorting a variety of agricultural products are included, e.g. seeds, meat, baked goods, plants and wood. First, the theory of color classification of agricultural and biological materials is introduced. Then, some tools for classifier development are presented. Finally, the implementation of the algorithm on real-time image processing hardware and example applications for industry are described. This paper also presents an image analysis algorithm and a prototype machine vision system developed for industry. This system automatically locates the surface of some plants using a digital camera and predicts information such as the size, potential value and type of the plant. The algorithm developed is feasible for real-time identification in an industrial environment.

  10. Fabrication and Characterization of Polymeric Hollow Fiber Membranes with Nano-scale Pore Sizes

    International Nuclear Information System (INIS)

    Amir Mansourizadeh; Ahmad Fauzi Ismail

    2011-01-01

    Porous polyvinylidene fluoride (PVDF) and polysulfone (PSF) hollow fiber membranes were fabricated via a wet spinning method. The membranes were characterized in terms of gas permeability, wetting pressure, overall porosity and water contact angle. The morphology of the membranes was examined by FESEM. From the gas permeation test, mean pore sizes of 7.3 and 9.6 nm were obtained for the PSF and PVDF membranes, respectively. Owing to the low polymer concentration in the dopes, the membranes demonstrated a relatively high overall porosity of 77%. From the FESEM examination, the PSF membrane presented a denser outer skin layer, which resulted in significantly lower N₂ permeance. Due to the high hydrophobicity and nano-scale pore sizes of the PVDF membrane, a good wetting pressure of 4.5×10⁵ Pa was achieved. (author)

  11. Scale size and life time of energy conversion regions observed by Cluster in the plasma sheet

    Directory of Open Access Journals (Sweden)

    M. Hamrin

    2009-11-01

    In this article, and in a companion paper by Hamrin et al. (2009) [Occurrence and location of concentrated load and generator regions observed by Cluster in the plasma sheet], we investigate localized energy conversion regions (ECRs) in Earth's plasma sheet. From more than 80 Cluster plasma sheet crossings (660 h of data) at altitudes of about 15–20 R_E, in the summer and fall of 2001, we have identified 116 Concentrated Load Regions (CLRs) and 35 Concentrated Generator Regions (CGRs). By examining variations in the power density E·J, where E is the electric field and J is the current density obtained by Cluster, we have estimated typical values of the scale size and life time of the CLRs and the CGRs. We find that a majority of the observed ECRs are rather stationary in space but varying in time. Assuming that the ECRs are cylindrically shaped and equal in size, we conclude that their typical scale size is 2 R_E ≲ ΔS_ECR ≲ 5 R_E. The ECRs hence occupy a significant portion of the mid-altitude plasma sheet. Moreover, the CLRs appear to be somewhat larger than the CGRs. The life time of the ECRs is of the order of 1–10 min, consistent with the large-scale magnetotail MHD simulations of Birn and Hesse (2005). The life time of the CGRs is somewhat shorter than that of the CLRs. On time scales of 1–10 min, we believe that ECRs rise and vanish in significant regions of the plasma sheet, possibly oscillating between load and generator character. It is probable that at least some of the observed ECRs oscillate energy back and forth in the plasma sheet instead of channeling it to the ionosphere.
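
    The load/generator distinction above reduces to the sign of the power density E·J. A minimal sketch of that criterion (with made-up field values, not Cluster measurements) might read:

```python
import numpy as np

# Sign convention as in the abstract: E.J > 0 means electromagnetic energy is
# consumed (a load region); E.J < 0 means it is generated (a generator region).
# The field values below are purely illustrative.
E = np.array([1.0e-3, 0.0, 2.0e-3])       # electric field, V/m
J = np.array([5.0e-9, 1.0e-9, -1.0e-9])   # current density, A/m^2

power_density = float(E @ J)              # W/m^3
region = "load" if power_density > 0 else "generator"
```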

  12. Small-scale rural bakery; Maaseudun pienleipomo

    Energy Technology Data Exchange (ETDEWEB)

    Alkula, R.; Malin, A.; Reisbacka, A.; Rytkoenen, A.

    1997-12-31

    The purpose of the study was to clarify how running a small-scale bakery can provide a farming enterprise with its primary or secondary source of livelihood. A questionnaire and interviews were conducted to clarify the current situation concerning small-scale rural bakeries. The experimental part of the study looked into different modes of production, devices used in preparing and processing doughs, and the baking of different kinds of pastries in different types of ovens under laboratory conditions. Based on the results obtained, example solutions were formulated for small-scale bakeries run with various modes and methods of production. Additionally, market reviews were conducted concerning appropriate equipment for small-scale bakeries. Baking for commercial purposes on the farm is still something new, as ca. 80 % of the enterprises covered by the study had operated for no more than five years. Many entrepreneurs (ca. 70 %) expressed a need for supplementary knowledge in some field related to baking. Rural bakeries are small-scale operations, with one-person enterprises amounting to 69 % and two-person enterprises to 29 %. Women are primarily responsible for baking. On average, the enterprises baked seven different products, but the amounts baked were usually small. In the experimental part of the study, loaves of rye bread were baked using five different types and sizes of oven accommodating 5-22 loaves of rye bread at a time. The oven type was found not to affect bread structure. The energy consumption for one ovenful varied between 2.4 and 7.0 kWh, i.e. 0.25-0.43 kWh per kilo. When baking rolls (30-140 rolls at a time), the power consumption varied between 1.2 and 3.5 kWh, i.e. 0.32-0.53 kWh per kilo. The other devices included in the comparative study were an upright deep-freezer, a multi-temperature cabinet and a fermenting cabinet. Furthermore, making rolls by hand was compared to using a machine for the same job, and likewise manual

  13. Energy-efficient electrical machines by new materials. Superconductivity in large electrical machines; Energieeffiziente elektrische Maschinen durch neue Materialien. Supraleitung in grossen elektrischen Maschinen

    Energy Technology Data Exchange (ETDEWEB)

    Frauenhofer, Joachim [Siemens, Nuernberg (Germany); Arndt, Tabea; Grundmann, Joern [Siemens, Erlangen (Germany)

    2013-07-01

    The implementation of superconducting materials in high-power electrical machines results in significant advantages regarding efficiency, size and dynamic behavior when compared to conventional machines. The application of HTS (high-temperature superconductors) in electrical machines allows significantly higher power densities to be achieved for synchronous machines. In order to gain experience with the new technology, Siemens carried out a series of development projects. A 400 kW model motor for the verification of a concept for the new technology was followed by a 4000 kVA generator as a high-speed machine - as well as a low-speed 4000 kW propeller motor with high torque. The 4000 kVA generator is still employed to carry out long-term tests and to check components. Superconducting machines have significantly lower weight and envelope dimensions compared to conventional machines, and for this reason alone they utilize resources better. At the same time, operating losses are slashed to about half and the efficiency increases. Beyond this, they set themselves apart as a result of their special features in operation, such as high overload capability, stiff alternating load behavior and low noise. HTS machines provide significant advantages where the reduction of footprint, weight and losses or the improved dynamic behavior results in significant improvements of the overall system. Propeller motors and generators for ships, offshore plants, wind turbine and hydroelectric plants, and large power stations are just some examples. HTS machines can therefore play a significant role when it comes to efficiently using resources and energy as well as reducing CO{sub 2} emissions.

  14. Quantitative assessment of the enamel machinability in tooth preparation with dental diamond burs.

    Science.gov (United States)

    Song, Xiao-Fei; Jin, Chen-Xin; Yin, Ling

    2015-01-01

    Enamel cutting using dental handpieces is a critical process in tooth preparation for dental restorations and treatment, but the machinability of enamel is poorly understood. This paper reports on the first quantitative assessment of the enamel machinability using computer-assisted numerical control, high-speed data acquisition, and force sensing systems. The enamel machinability, in terms of cutting forces, force ratio, cutting torque, cutting speed and specific cutting energy, was characterized in relation to enamel surface orientation, specific material removal rate and diamond bur grit size. The results show that enamel surface orientation, specific material removal rate and diamond bur grit size critically affected the enamel cutting capability. Cutting buccal/lingual surfaces resulted in significantly higher tangential and normal forces, torques and specific energy (p < 0.05), providing quantitative data on enamel machinability for clinical dental practice. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Hybrid machining processes perspectives on machining and finishing

    CERN Document Server

    Gupta, Kapil; Laubscher, R F

    2016-01-01

    This book describes various hybrid machining and finishing processes. It gives a critical review of the past work based on them as well as the current trends and research directions. For each hybrid machining process presented, the authors list the method of material removal, machining system, process variables and applications. This book provides a deep understanding of the need, application and mechanism of hybrid machining processes.

  16. Dwell time adjustment for focused ion beam machining

    International Nuclear Information System (INIS)

    Taniguchi, Jun; Satake, Shin-ichi; Oosumi, Takaki; Fukushige, Akihisa; Kogo, Yasuo

    2013-01-01

    Focused ion beam (FIB) machining is potentially useful for micro/nano fabrication of hard brittle materials, because the removal method involves physical sputtering. Usually, micro/nano-scale patterning of hard brittle materials is very difficult to achieve by mechanical polishing or dry etching. Furthermore, in most reported examples, FIB machining has been applied to silicon substrates in a limited range of shapes. Therefore, a versatile method for FIB machining is required. We previously established dwell time adjustment for mechanical polishing. The dwell time adjustment is calculated using a convolution model derived from Preston’s hypothesis. More specifically, the target removal shape is a convolution of the unit removal shape, and the dwell time is calculated by means of one of four algorithms. We investigated these algorithms for dwell time adjustment in FIB machining and found that a combination of a fast Fourier transform calculation technique and a constraint-type calculation is suitable. By applying this algorithm, we succeeded in machining a spherical lens shape with a diameter of 2.93 μm and a depth of 203 nm in a glassy carbon substrate by means of FIB with dwell time adjustment.
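
    The convolution model described above lends itself to a compact numerical sketch. The following is an illustrative 1-D reconstruction, not the authors' code: it treats the target removal depth as the circular convolution of a per-unit-time beam footprint with the dwell-time map, and enforces non-negative dwell times with a clipped fixed-point iteration as a simple stand-in for the constraint-type FFT algorithm.

```python
import numpy as np

def dwell_times(target, footprint, iterations=300):
    """Iteratively solve target = footprint (*) t for a dwell-time map t >= 0."""
    F = np.fft.rfft(footprint)
    alpha = 1.0 / footprint.sum()      # step size chosen for stable updates
    t = np.zeros_like(target)
    for _ in range(iterations):
        removal = np.fft.irfft(np.fft.rfft(t) * F, n=len(target))
        t = np.maximum(t + alpha * (target - removal), 0.0)  # dwell time >= 0
    return t

x = np.linspace(-1.0, 1.0, 256)
target = np.clip(0.2 - x**2, 0.0, None)               # lens-like removal profile
footprint = np.fft.ifftshift(np.exp(-(x / 0.05)**2))  # Gaussian beam, centred at index 0
t = dwell_times(target, footprint)
```

Convolving the computed dwell map with the footprint reproduces the target profile to within a small residual carried by spatial frequencies the beam cannot resolve.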

  17. Design and fabrication of a cassava peeling machine | Akintunde ...

    African Journals Online (AJOL)

    Design and fabrication of a cassava peeling machine. ... Journal Home > Vol 23, No 1 (2005) > ... The varying shapes and sizes of cassava tubers have made cassava peeling to be one of the major problems in the mechanization of cassava ...

  18. Transient time of an Ising machine based on injection-locked laser network

    International Nuclear Information System (INIS)

    Takata, Kenta; Utsunomiya, Shoko; Yamamoto, Yoshihisa

    2012-01-01

    We numerically study the dynamics and frequency response of the recently proposed Ising machine based on the polarization degrees of freedom of an injection-locked laser network (Utsunomiya et al 2011 Opt. Express 19 18091). We simulate various anti-ferromagnetic Ising problems, including the ones with symmetric Ising and Zeeman coefficients, which enable us to study the problem size up to M = 1000. Transient time, to reach a steady-state polarization configuration after a given Ising problem is mapped onto the system, is inversely proportional to the locking bandwidth and does not scale exponentially with the problem size. In the Fourier analysis with first-order linearization approximation, we find that the cut-off frequency of a system's response is almost identical to the locking bandwidth, which supports the time-domain analysis. It is also shown that the Zeeman term, which is created by the horizontally polarized injection signal from the master laser, serves as an initial driving force on the system and contributes to the transient time in addition to the inverse locking bandwidth. (paper)
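
    As a toy illustration of the cost function such a machine minimizes (a sketch of the Ising objective only, not of the injection-locked laser dynamics), greedy single-spin descent on a small anti-ferromagnetic ring can stand in for the analogue relaxation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Anti-ferromagnetic ring of N spins: J[i, j] = +1 between neighbours, so the
# Ising energy H = 1/2 * s.J.s is minimised by an alternating configuration.
N = 16
J = np.zeros((N, N))
for i in range(N):
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = 1.0

def energy(s):
    return 0.5 * s @ J @ s

# Greedy single-spin-flip descent: a crude digital stand-in for relaxation.
s = rng.choice([-1.0, 1.0], size=N)
improved = True
while improved:
    improved = False
    for i in range(N):
        if s[i] * (J[i] @ s) > 0:   # flipping spin i lowers the energy
            s[i] = -s[i]
            improved = True
```

Unlike the laser network, this digital descent can stall in a local minimum (paired domain walls); the global minimum for the ring is the alternating state with H = -16.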

  19. The general atomic strand winding machine

    International Nuclear Information System (INIS)

    Matt, P.

    1976-01-01

    In conjunction with the integrated development of their high temperature gas cooled reactors (HTGR), General Atomic of San Diego, USA, also developed a strand winding system for the horizontal prestressing of pressure vessels. The machine lay-out, its capabilities and the test program carried out in the laboratory and on a full scale pressure vessel model are described. (author)

  20. Fault detection and isolation in processes involving induction machines

    Energy Technology Data Exchange (ETDEWEB)

    Zell, K; Medvedev, A [Control Engineering Group, Luleaa University of Technology, Luleaa (Sweden)

    1998-12-31

    A model-based technique for fault detection and isolation in electro-mechanical systems comprising induction machines is introduced. Two coupled state observers, one for the induction machine and another for the mechanical load, are used to detect and recognize fault-specific behaviors (fault signatures) from the real-time measurements of the rotor angular velocity and terminal voltages and currents. Practical applicability of the method is verified in full-scale experiments with a conveyor belt drive at SSAB, Luleaa Works. (orig.) 3 refs.

  2. Effects of the application of different particle sizes of mill scale (residue) in mass red ceramic

    International Nuclear Information System (INIS)

    Arnt, A.B.C.; Rocha, M.R.; Meller, J.G.

    2012-01-01

    This study aims to evaluate the influence of the particle size of mill scale, a residue, when added to a ceramic body. This residue, rich in iron oxide, may be used as a pigment in the ceramics industry. The use of pigments in ceramic products is related to the characteristics of non-toxicity, chemical stability and determination of tone. The tendency of the pigment to solubilize depends on its specific surface area. The residue studied was initially subjected to physical and chemical characterization and added, in a proportion of 5% and with different particle sizes, to a commercial white-firing ceramic body. The formulations were sintered at a temperature of 950 °C and evaluated for loss on ignition, firing linear shrinkage, water absorption, flexural strength and difference of tone. Samples with the finest mill scale particles (0.038 mm) showed higher mechanical strength values, on the order of 18 MPa. (author)

  3. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework, including a multi-objective interval optimization model and an evidential reasoning (ER) approach, to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming to simultaneously consider the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.
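
    The interval idea at the heart of the model can be sketched with elementary interval arithmetic (all figures hypothetical; MGSOACC and the ER weighting are well beyond a snippet): every uncertain quantity is an interval, and the two competing objectives are the midpoint of the life cycle cost interval (expected cost) and its half-width (risk).

```python
# Each uncertain quantity is an interval (lo, hi); the optimiser would trade
# off the midpoint of the LCC interval (expected cost) against its half-width
# (risk). All figures below are hypothetical.

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_scale(k, a):                  # non-negative scalar k assumed here
    return (k * a[0], k * a[1])

def midpoint(a):
    return 0.5 * (a[0] + a[1])

def half_width(a):
    return 0.5 * (a[1] - a[0])

capex = (900.0, 900.0)          # deterministic capital cost, k$
annual_om = (40.0, 70.0)        # uncertain O&M cost, k$/year (wind/solar driven)
lcc = interval_add(capex, interval_scale(20.0, annual_om))   # 20-year horizon

objectives = (midpoint(lcc), half_width(lcc))   # minimise both simultaneously
```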

  4. Toward industrial scale synthesis of ultrapure singlet nanoparticles with controllable sizes in a continuous gas-phase process

    Science.gov (United States)

    Feng, Jicheng; Biskos, George; Schmidt-Ott, Andreas

    2015-10-01

    Continuous gas-phase synthesis of nanoparticles is associated with rapid agglomeration, which can be a limiting factor for numerous applications. In this report, we challenge this paradigm by providing experimental evidence to support that gas-phase methods can be used to produce ultrapure non-agglomerated “singlet” nanoparticles having tunable sizes at room temperature. By controlling the temperature in the particle growth zone to guarantee complete coalescence of colliding entities, the size of singlets in principle can be regulated from that of single atoms to any desired value. We assess our results in the context of a simple analytical model to explore the dependence of singlet size on the operating conditions. Agreement of the model with experimental measurements shows that these methods can be effectively used for producing singlets that can be processed further by many alternative approaches. Combined with the capabilities of up-scaling and unlimited mixing that spark ablation enables, this study provides an easy-to-use concept for producing the key building blocks for low-cost industrial-scale nanofabrication of advanced materials.

  5. Meter-scale Urban Land Cover Mapping for EPA EnviroAtlas Using Machine Learning and OBIA Remote Sensing Techniques

    Science.gov (United States)

    Pilant, A. N.; Baynes, J.; Dannenberg, M.; Riegel, J.; Rudder, C.; Endres, K.

    2013-12-01

    US EPA EnviroAtlas is an online collection of tools and resources that provides geospatial data, maps, research, and analysis on the relationships between nature, people, health, and the economy (http://www.epa.gov/research/enviroatlas/index.htm). Using EnviroAtlas, you can see and explore information related to the benefits (e.g., ecosystem services) that humans receive from nature, including clean air, clean and plentiful water, natural hazard mitigation, biodiversity conservation, food, fuel, and materials, recreational opportunities, and cultural and aesthetic value. EPA developed several urban land cover maps at very high spatial resolution (one-meter pixel size) for a portion of EnviroAtlas devoted to urban studies. This urban mapping effort supported analysis of relations among land cover, human health and demographics at the US Census Block Group level. Supervised classification of 2010 USDA NAIP (National Agricultural Imagery Program) digital aerial photos produced eight-class land cover maps for several cities, including Durham, NC, Portland, ME, Tampa, FL, New Bedford, MA, Pittsburgh, PA, Portland, OR, and Milwaukee, WI. Semi-automated feature extraction methods were used to classify the NAIP imagery: genetic algorithms/machine learning, random forest, and object-based image analysis (OBIA). In this presentation we describe the image processing and fuzzy accuracy assessment methods used, and report on some sustainability and ecosystem service metrics computed using this land cover as input (e.g., carbon sequestration from the USFS iTree model; health and demographics in relation to road buffer forest width). We also discuss the land cover classification schema (a modified Anderson Level 1, after the National Land Cover Data (NLCD)), and offer some observations on lessons learned. [Figure: Meter-scale urban land cover in Portland, OR overlaid on a NAIP aerial photo; streets, buildings and individual trees are identifiable.]

  6. Automatic inspection of textured surfaces by support vector machines

    Science.gov (United States)

    Jahanbin, Sina; Bovik, Alan C.; Pérez, Eduardo; Nair, Dinesh

    2009-08-01

    Automatic inspection of manufactured products with natural-looking textures is a challenging task. Products such as tiles, textiles, leather, and lumber present image textures that cannot be modeled as periodic or otherwise regular; therefore, stochastic modeling of the local intensity distribution is required. An inspection system to replace human inspectors should be flexible in detecting flaws such as scratches, cracks, and stains occurring in various shapes and sizes that have never been seen before. A computer vision algorithm is proposed in this paper that extracts local statistical features from grey-level texture images decomposed with wavelet frames into subbands of various orientations and scales. The local features extracted are second-order statistics derived from grey-level co-occurrence matrices. Subsequently, a support vector machine (SVM) classifier is trained to learn a general description of normal texture from defect-free samples. This algorithm is implemented in LabVIEW and is capable of processing natural texture images in real-time.
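
    The feature-extraction half of such a pipeline is easy to sketch. The code below is a hedged, numpy-only illustration rather than the paper's system: it computes three classic co-occurrence statistics (contrast, energy, homogeneity) per patch, and a simple z-score threshold against defect-free patches stands in for the trained SVM.

```python
import numpy as np

def glcm_features(patch, levels=8):
    """Contrast, energy, homogeneity from a horizontal co-occurrence matrix."""
    q = np.clip((patch * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return np.array([
        ((i - j) ** 2 * glcm).sum(),           # contrast
        (glcm ** 2).sum(),                     # energy
        (glcm / (1.0 + np.abs(i - j))).sum(),  # homogeneity
    ])

# "Train" on defect-free noise patches, then flag feature outliers.
rng = np.random.default_rng(1)
normal = [glcm_features(rng.random((32, 32))) for _ in range(50)]
mu, sd = np.mean(normal, axis=0), np.std(normal, axis=0)

def is_defective(patch, k=4.0):
    z = (glcm_features(patch) - mu) / sd
    return bool(np.any(np.abs(z) > k))

scratched = rng.random((32, 32))
scratched[14:18, :] = 1.0     # a bright horizontal scratch
```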

  7. Recycling of mill scale in sintering process

    Directory of Open Access Journals (Sweden)

    El-Hussiny N.A.

    2011-01-01

    This investigation deals with the effect of replacing some amount of Baharia high-barite iron ore concentrate by mill scale waste, which is characterized by a high iron oxide content, on the parameters of the sintering process, and investigates the effect of different amounts of added coke breeze on the sintering process parameters when using 5% mill scale waste with 95% iron ore concentrate. The results of this work show that replacement of iron ore concentrate with mill scale increases the amount of ready-made sinter, the sinter strength and the productivity of the sinter machine and of the blast furnace yard. Also, the increase of coke breeze leads to an increase in the ready-made sinter and the productivity of the sintering machine at the blast furnace yard. Above 5% mill scale, the productivity of the sintering machine decreased slightly due to the decrease of the vertical velocity.

  8. Economies of scale and optimal size of hospitals: Empirical results for Danish public hospitals

    DEFF Research Database (Denmark)

    Kristensen, Troels

    The aim is to investigate whether the current configuration of Danish hospitals is subject to scale economies that may justify such plans and to estimate an optimal hospital size. Methods: We estimate cost functions using panel data on total costs, DRG-weighted casemix, and number of beds for three years from 2004-2006. A short-run cost function is used to derive estimates of long-run scale economies by applying the envelope condition. Results: We identify moderate to significant long-run economies of scale when applying two alternative ... The optimal number of beds per hospital is estimated to be 275 beds per site. Sensitivity analysis to partial changes in model parameters yields a joint 95% confidence interval in the range 130 - 585 beds per site. Conclusions: The results indicate that it may be appropriate to consolidate the production of small...
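
    The envelope-condition logic can be illustrated on a stylized cost function (hypothetical coefficients, not the paper's estimates): with quadratic costs C(q) = a + b·q + c·q², economies of scale hold while average cost exceeds marginal cost, and the optimal size is where average cost is minimized, q* = sqrt(a/c).

```python
import math

# Stylized hospital cost function C(q) = a + b*q + c*q**2, with q = beds.
# The coefficients are hypothetical, chosen only to illustrate the mechanics.
a, b, c = 5000.0, 10.0, 0.08

def average_cost(q):
    return (a + b * q + c * q**2) / q

def marginal_cost(q):
    return b + 2.0 * c * q

q_star = math.sqrt(a / c)   # average cost is minimised where AC = MC
print(round(q_star))        # → 250
```

Below q* the ratio AC/MC exceeds 1 (scale economies); above q*, diseconomies set in.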

  9. Dependence of exponents on text length versus finite-size scaling for word-frequency distributions

    Science.gov (United States)

    Corral, Álvaro; Font-Clos, Francesc

    2017-08-01

    Some authors have recently argued that a finite-size scaling law for the text-length dependence of word-frequency distributions cannot be conceptually valid. Here we give solid quantitative evidence for the validity of this scaling law, using both careful statistical tests and analytical arguments based on the generalized central-limit theorem applied to the moments of the distribution (and obtaining a novel derivation of Heaps' law as a by-product). We also find that the picture of word-frequency distributions with power-law exponents that decrease with text length [X. Yan and P. Minnhagen, Physica A 444, 828 (2016), 10.1016/j.physa.2015.10.082] does not stand up to rigorous statistical analysis. Instead, we show that the distributions are perfectly described by power-law tails with stable exponents, whose values are close to 2, in agreement with the classical Zipf's law. Some misconceptions about scaling are also clarified.
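
    The claim of stable exponents near 2 can be checked numerically. The sketch below is an illustration of the method, not the paper's corpus analysis: it samples synthetic Zipfian "texts" of growing length and applies a discrete maximum-likelihood (Hill-type) estimator to the word-frequency tail; the fitted exponent stays close to 2 across text lengths.

```python
import numpy as np

def zipf_sample(n_tokens, n_types=50_000, a=1.0, rng=None):
    """Token counts for a vocabulary obeying the rank-frequency law p(r) ~ r**-a."""
    if rng is None:
        rng = np.random.default_rng(0)
    p = 1.0 / np.arange(1, n_types + 1) ** a
    p /= p.sum()
    counts = rng.multinomial(n_tokens, p)
    return counts[counts > 0]

def tail_exponent(freqs, fmin=5):
    """Discrete MLE (Hill-type) estimate of gamma in P(f) ~ f**-gamma."""
    f = freqs[freqs >= fmin]
    return 1.0 + len(f) / np.log(f / (fmin - 0.5)).sum()

# Exponents for three text lengths; a = 1 implies gamma close to 2.
rng = np.random.default_rng(0)
gammas = [tail_exponent(zipf_sample(n, rng=rng))
          for n in (10_000, 100_000, 1_000_000)]
```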

  10. Surface quality analysis of die steels in powder-mixed electrical discharge machining using titan powder in fine machining

    Directory of Open Access Journals (Sweden)

    Banh Tien Long

    2016-06-01

    Improving the quality of mold surfaces after electrical discharge machining is still being considered by many researchers. Powder-mixed dielectric in electrical discharge machining has been shown to be one of the processing methods with high efficiency. This article reports on the results of the surface quality of mold steels after powder-mixed electrical discharge machining using titanium powder in fine machining. The process parameters such as electrode material, workpiece material, electrode polarity, pulse on-time, pulse off-time, current, and titanium powder concentration were considered in the research. These materials are most commonly used with die-sinking electrical discharge machining in the manufacture of molds and were selected as the subjects of research: the workpiece materials were SKD61, SKT4, and SKD11 mold steels, and the electrode materials were copper and graphite. Taguchi's method was used to design the experiments. The influence of the parameters on surface roughness was evaluated through the average value and the signal-to-noise (S/N) ratio. Results showed that parameters such as electrical current, electrode material, pulse on-time, electrode polarity, and the interaction between electrode material and powder concentration mostly influence surface roughness, with surface roughness at the optimal parameters SRopt = 1.73 ± 0.39 µm. Analysis of the surface layer after powder-mixed electrical discharge machining using titanium powder under optimal conditions showed that the white layer is more uniform in thickness and increased in hardness (≈861.0 HV), and the number and size of microscopic cracks are reduced. This leads to a significant increase in the quality of the surface layer.

  11. High-speed micro-electro-discharge machining.

    Energy Technology Data Exchange (ETDEWEB)

    Chandrasekar, Srinivasan Dr. (.School of Industrial Engineering, West Lafayette, IN); Moylan, Shawn P. (School of Industrial Engineering, West Lafayette, IN); Benavides, Gilbert Lawrence

    2005-09-01

    When two electrodes are in close proximity in a dielectric liquid, application of a voltage pulse can produce a spark discharge between them, resulting in a small amount of material removal from both electrodes. Pulsed application of the voltage at discharge energies in the range of micro-Joules results in the continuous material removal process known as micro-electro-discharge machining (micro-EDM). Spark erosion by micro-EDM provides significant opportunities for producing small features and micro-components such as nozzle holes, slots, shafts and gears in virtually any conductive material. If the speed and precision of micro-EDM processes can be significantly enhanced, then they have the potential to be used for a wide variety of micro-machining applications, including fabrication of microelectromechanical system (MEMS) components. Toward this end, a better understanding of the impact that the various machining parameters have on material removal has been established through a single-discharge study of micro-EDM and a parametric study of small-hole making by micro-EDM. The main avenues for improving the speed and efficiency of the micro-EDM process are in the areas of more controlled pulse generation in the power supply and more controlled positioning of the tool electrode during the machining process. Further investigation of the micro-EDM process in three dimensions leads to important design rules, specifically the smallest feature size attainable by the process.

  12. Integrating Heuristic and Machine-Learning Methods for Efficient Virtual Machine Allocation in Data Centers

    OpenAIRE

    Pahlevan, Ali; Qu, Xiaoyu; Zapater Sancho, Marina; Atienza Alonso, David

    2017-01-01

    Modern cloud data centers (DCs) need to tackle efficiently the increasing demand for computing resources and address the energy efficiency challenge. Therefore, it is essential to develop resource provisioning policies that are aware of virtual machine (VM) characteristics, such as CPU utilization and data communication, and applicable in dynamic scenarios. Traditional approaches fall short in terms of flexibility and applicability for large-scale DC scenarios. In this paper we propose a heur...

  13. Development of a New Punch Head Shape to Replicate Scale-Up Issues on a Laboratory Tablet Press III: Replicating sticking phenomenon using the SAS punch and evaluation by checking the tablet surface using 3D laser scanning microscope.

    Science.gov (United States)

    Ito, Manabu; Aoki, Shigeru; Uchiyama, Jumpei; Yamato, Keisuke

    2018-04-20

    Sticking on the punch tip is commonly observed at the scale-up stage when using a commercial tableting machine. The difference in total compression time between a laboratory and a commercial tableting machine is considered one of the main root causes of scale-up issues in tableting processes. The proposed Size Adjusted for Scale-up (SAS) punch can be used to adjust the consolidation and dwell times to match those of a commercial tableting machine, so that the sticking phenomenon can be replicated at the pilot-scale stage. As reported in this paper, sticking was quantified by examining the tablet surface with a 3D laser scanning microscope. It was shown that the sticking area decreased with the addition of magnesium stearate to the formulation, but the sticking depth was not affected by the amount of magnesium stearate added. It is proposed that a 3D laser scanning microscope can be applied to evaluate sticking as a process analytical technology (PAT) tool, so that sticking can be monitored continuously without stopping the machine. Copyright © 2018. Published by Elsevier Inc.

  14. Gene selection and classification for cancer microarray data based on machine learning and similarity measures

    Directory of Open Access Journals (Sweden)

    Liu Qingzhong

    2011-12-01

    Full Text Available Abstract Background Microarray data have a high dimension of variables and a small sample size. In microarray data analyses, two important issues are how to choose genes, which provide reliable and good prediction for disease status, and how to determine the final gene set that is best for classification. Associations among genetic markers mean one can exploit information redundancy to potentially reduce classification cost in terms of time and money. Results To deal with redundant information and improve classification, we propose a gene selection method, Recursive Feature Addition, which combines supervised learning and statistical similarity measures. To determine the final optimal gene set for prediction and classification, we propose an algorithm, Lagging Prediction Peephole Optimization. By using six benchmark microarray gene expression data sets, we compared Recursive Feature Addition with recently developed gene selection methods: Support Vector Machine Recursive Feature Elimination, Leave-One-Out Calculation Sequential Forward Selection and several others. Conclusions On average, with the use of popular learning machines including Nearest Mean Scaled Classifier, Support Vector Machine, Naive Bayes Classifier and Random Forest, Recursive Feature Addition outperformed other methods. Our studies also showed that Lagging Prediction Peephole Optimization is superior to random strategy; Recursive Feature Addition with Lagging Prediction Peephole Optimization obtained better testing accuracies than the gene selection method varSelRF.
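Recursive Feature Addition as defined by the authors combines supervised learning with statistical similarity measures; the sketch below is a simplified greedy forward-addition loop scored by a leave-one-out nearest-class-mean classifier (one of the learners named in the abstract). The synthetic data and the exact scoring rule are assumptions for illustration, not the published algorithm:

```python
import numpy as np

def nearest_mean_accuracy(X, y, feats):
    """Leave-one-out accuracy of a nearest-class-mean classifier
    restricted to the selected feature indices."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr, ytr = X[mask][:, feats], y[mask]
        means = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(means, key=lambda c: np.linalg.norm(X[i, feats] - means[c]))
        correct += pred == y[i]
    return correct / len(y)

def forward_feature_addition(X, y, k):
    """Greedily add the feature that most improves LOO accuracy."""
    selected = []
    for _ in range(k):
        remaining = [f for f in range(X.shape[1]) if f not in selected]
        best = max(remaining,
                   key=lambda f: nearest_mean_accuracy(X, y, selected + [f]))
        selected.append(best)
    return selected

# Synthetic demo: 40 samples, 5 features, only feature 2 carries class signal.
rng = np.random.default_rng(0)
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(size=(40, 5))
X[:, 2] += 3.0 * y
selected = forward_feature_addition(X, y, k=2)  # feature 2 should enter first
```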

  15. Improving the reliability of stator insulation system in rotating machines

    International Nuclear Information System (INIS)

    Gupta, G.K.; Sedding, H.G.; Culbert, I.M.

    1997-01-01

    Reliable performance of rotating machines, especially generators and primary heat transport pump motors, is critical to the efficient operation of nuclear stations. A significant number of premature machine failures have been attributed to stator insulation problems. Ontario Hydro has attempted to assure the long-term reliability of the insulation system in critical rotating machines through proper specifications and quality assurance tests for new machines and periodic on-line and off-line diagnostic tests on machines in service. The experience gained over the last twenty years is presented in this paper. Functional specifications have been developed for the insulation system in critical rotating machines based on engineering considerations and our past experience. These specifications include insulation stress, insulation resistance and polarization index, partial discharge levels, dissipation factor and tip-up, and AC and DC hipot tests. Voltage endurance tests are specified for the groundwall insulation system of full-size production coils and bars. For machines with multi-turn coils, turn insulation strength for fast-fronted surges is specified and verified through tests on all coils in the factory and on samples of finished coils in the laboratory. Periodic on-line and off-line diagnostic tests were performed to assess the condition of the stator insulation system in machines in service. Partial discharges are measured on-line using several techniques to detect any excessive degradation of the insulation system in critical machines. Novel sensors have been developed and installed in several machines to facilitate measurements of partial discharges on operating machines. Several off-line tests are performed either to confirm the problems indicated by the on-line tests or to assess the insulation system in machines which cannot be easily tested on-line. Experience with these tests, including their capabilities and limitations, is presented. (author)

  16. Molecular machines open cell membranes.

    Science.gov (United States)

    García-López, Víctor; Chen, Fang; Nilewski, Lizanne G; Duret, Guillaume; Aliyan, Amir; Kolomeisky, Anatoly B; Robinson, Jacob T; Wang, Gufeng; Pal, Robert; Tour, James M

    2017-08-30

    Beyond the more common chemical delivery strategies, several physical techniques are used to open the lipid bilayers of cellular membranes. These include using electric and magnetic fields, temperature, ultrasound or light to introduce compounds into cells, to release molecular species from cells or to selectively induce programmed cell death (apoptosis) or uncontrolled cell death (necrosis). More recently, molecular motors and switches that can change their conformation in a controlled manner in response to external stimuli have been used to produce mechanical actions on tissue for biomedical applications. Here we show that molecular machines can drill through cellular bilayers using their molecular-scale actuation, specifically nanomechanical action. Upon physical adsorption of the molecular motors onto lipid bilayers and subsequent activation of the motors using ultraviolet light, holes are drilled in the cell membranes. We designed molecular motors and complementary experimental protocols that use nanomechanical action to induce the diffusion of chemical species out of synthetic vesicles, to enhance the diffusion of traceable molecular machines into and within live cells, to induce necrosis and to introduce chemical species into live cells. We also show that, by using molecular machines that bear short peptide addends, nanomechanical action can selectively target specific cell-surface recognition sites. Beyond the in vitro applications demonstrated here, we expect that molecular machines could also be used in vivo, especially as their design progresses to allow two-photon, near-infrared and radio-frequency activation.

  18. Probabilistic finite-size transport models for fusion: Anomalous transport and scaling laws

    International Nuclear Information System (INIS)

    Milligen, B.Ph. van; Sanchez, R.; Carreras, B.A.

    2004-01-01

    Transport in fusion plasmas in the low confinement mode is characterized by several remarkable properties: the anomalous scaling of transport with system size, stiff (or 'canonical') profiles, power degradation, and rapid transport phenomena. The present article explores the possibilities of constructing a unified transport model, based on the continuous-time random walk, in which all these phenomena are handled adequately. The resulting formalism appears to be sufficiently general to provide a sound starting point for the development of a full-blown plasma transport code, capable of incorporating the relevant microscopic transport mechanisms, and allowing predictions of confinement properties.
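A continuous-time random walk is easy to sketch by Monte Carlo: each particle alternates a random waiting time with a random jump. The exponential/Gaussian choices below have finite moments and therefore reduce to ordinary diffusion; the anomalous transport discussed in the abstract arises when heavy-tailed waiting-time or jump distributions are substituted. All parameters here are illustrative:

```python
import numpy as np

def ctrw_positions(n_particles, t_max, rng):
    """Final positions of CTRW particles: draw a waiting time, then a jump,
    until the clock passes t_max."""
    pos = np.zeros(n_particles)
    for p in range(n_particles):
        t = 0.0
        while True:
            t += rng.exponential(1.0)        # waiting time (mean 1)
            if t > t_max:
                break
            pos[p] += rng.normal(0.0, 1.0)   # jump length (variance 1)
    return pos

rng = np.random.default_rng(1)
x = ctrw_positions(1000, t_max=100.0, rng=rng)
# For these finite-moment choices the mean-squared displacement grows as
# (jump variance) * (event rate) * t_max = 100: classical diffusive scaling.
```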

  19. Predicting Solar Activity Using Machine-Learning Methods

    Science.gov (United States)

    Bobra, M.

    2017-12-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to [1] empirically determine the signatures of this mechanism in solar image data and [2] use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine-learning (e.g. Ahmed et al., 2013; Nishizuka et al. 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.

  20. Design Methodology of a Brushless IPM Machine for a Zero Speed Injection Based Sensorless Control

    OpenAIRE

    Godbehere, Jonathan; Wrobel, Rafal; Drury, David; Mellor, Phil

    2015-01-01

    In this paper a design approach for a sensorless-controlled, brushless, interior permanent magnet machine is presented. An initial study based on established electrical machine formulas provides the machine’s basic geometrical sizing. The next design stage combines a particle swarm optimisation (PSO) search routine with a magneto-static finite element (FE) solver to provide a more in-depth optimisation. The optimisation system has been formulated to derive alternative machine design variants, ...
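The PSO search routine mentioned above follows a standard pattern even though the paper couples it to an FE solver. Below is a minimal global-best PSO on a stand-in quadratic objective; the inertia and acceleration constants are conventional defaults, not the authors' settings:

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimiser."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global best
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive and social constants
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in for the FE objective: a smooth function with optimum at (1, 2).
best, fbest = pso_minimize(lambda p: (p[0] - 1.0)**2 + (p[1] - 2.0)**2,
                           lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
```

In the paper's setting, `f` would wrap a magneto-static FE evaluation of a candidate geometry instead of a closed-form expression.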

  1. Law machines: scale models, forensic materiality and the making of modern patent law.

    Science.gov (United States)

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  2. Machine terms dictionary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1979-04-15

    This book gives descriptions of machine terms, covering machine design, drawing, machining methods, machine tools, machine materials, automobiles, measurement and control, electricity, electronics fundamentals, information technology, quality assurance, AutoCAD and FA terms, and important formulas of mechanical engineering.

  3. Trends in size of tropical deforestation events signal increasing dominance of industrial-scale drivers

    Science.gov (United States)

    Austin, Kemen G.; González-Roglich, Mariano; Schaffer-Smith, Danica; Schwantes, Amanda M.; Swenson, Jennifer J.

    2017-05-01

    Deforestation continues across the tropics at alarming rates, with repercussions for ecosystem processes, carbon storage and long term sustainability. Taking advantage of recent fine-scale measurement of deforestation, this analysis aims to improve our understanding of the scale of deforestation drivers in the tropics. We examined trends in forest clearings of different sizes from 2000-2012 by country, region and development level. As tropical deforestation increased from approximately 6900 kha yr-1 in the first half of the study period, to >7900 kha yr-1 in the second half of the study period, >50% of this increase was attributable to the proliferation of medium and large clearings (>10 ha). This trend was most pronounced in Southeast Asia and in South America. Outside of Brazil >60% of the observed increase in deforestation in South America was due to an upsurge in medium- and large-scale clearings; Brazil had a divergent trend of decreasing deforestation, >90% of which was attributable to a reduction in medium and large clearings. The emerging prominence of large-scale drivers of forest loss in many regions and countries suggests the growing need for policy interventions which target industrial-scale agricultural commodity producers. The experience in Brazil suggests that there are promising policy solutions to mitigate large-scale deforestation, but that these policy initiatives do not adequately address small-scale drivers. By providing up-to-date and spatially explicit information on the scale of deforestation, and the trends in these patterns over time, this study contributes valuable information for monitoring, and designing effective interventions to address deforestation.

  4. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Lin, Yashen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gevorgian, Vahan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Purba, Victor [University of Minnesota; Dhople, Sairaj [University of Minnesota

    2017-09-28

    From the inception of power systems, synchronous machines have acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, power electronics interfaces are playing a growing role as they are the primary interface for several types of renewable energy sources and storage technologies. As the role of power electronics in systems continues to grow, it is crucial to investigate the properties of bulk power systems in low inertia settings. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. Furthermore, the inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings and, hence, differing levels of inertia. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the interaction between the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
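The assessment method described, linearize the coupled model and inspect its eigenvalues, reduces to a simple test: the equilibrium is small-signal stable when every eigenvalue of the state matrix has a negative real part. The two-state swing model below is a toy stand-in for illustration, not the paper's machine-inverter model:

```python
import numpy as np

def is_small_signal_stable(A, tol=1e-9):
    """dx/dt = A x is asymptotically stable iff every eigenvalue of the
    linearised state matrix A has a strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < -tol))

# Toy two-state swing model: delta' = omega,  M*omega' = -D*omega - K*delta
# (inertia M, damping D, synchronising stiffness K; all values illustrative).
M, D, K = 2.0, 1.0, 5.0
A = np.array([[0.0, 1.0],
              [-K / M, -D / M]])
```

In the paper, the same eigenvalue check is applied to the full linearised machine-inverter-load model as the inverter rating is scaled.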

  5. Engineered Surface Properties of Porous Tungsten from Cryogenic Machining

    Science.gov (United States)

    Schoop, Julius Malte

    force, temperature and surface roughness data is developed and used to study the deformation mechanisms of porous tungsten under different machining conditions. It is found that when hmax = hc, ductile mode machining of otherwise highly brittle porous tungsten is possible. The value of hc is approximately the same as the average ligament size of the 80% density porous tungsten workpiece.

  6. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    Science.gov (United States)

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods respectively.
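The scale-selection idea behind the method, that the scale-normalized LoG response at a blob's center peaks when the filter scale matches the blob scale, can be sketched directly. The kernel truncation radius, image size and test blob below are illustrative choices, not the paper's implementation:

```python
import numpy as np

def log_kernel(sigma, radius):
    """Scale-normalised Laplacian-of-Gaussian kernel (sigma^2 * LoG)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return (r2 / sigma**2 - 2.0) * g

def best_scale(image, center, sigmas):
    """Characteristic scale at `center`: the sigma giving the strongest
    (negative, for a bright blob) normalised LoG response there.
    Assumes `center` lies at least 3*max(sigmas) away from the borders."""
    cy, cx = center
    responses = []
    for s in sigmas:
        r = int(3 * s)
        k = log_kernel(s, r)
        patch = image[cy - r:cy + r + 1, cx - r:cx + r + 1]
        responses.append(-(k * patch).sum())
    return sigmas[int(np.argmax(responses))]

# Synthetic bright Gaussian blob of scale 4 at the centre of a 64x64 image.
ax = np.arange(64)
xx, yy = np.meshgrid(ax, ax)
img = np.exp(-((xx - 32)**2 + (yy - 32)**2) / (2 * 4.0**2))
```

A full detector would evaluate the response over a neighbourhood of the seed point and across scales, then prune and rank the resulting candidates as the abstract describes.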

  7. Some relations between quantum Turing machines and Turing machines

    OpenAIRE

    Sicard, Andrés; Vélez, Mario

    1999-01-01

    For quantum Turing machines we present three elements: Its components, its time evolution operator and its local transition function. The components are related with the components of deterministic Turing machines, the time evolution operator is related with the evolution of reversible Turing machines and the local transition function is related with the transition function of probabilistic and reversible Turing machines.

  8. A Study of the Interaction between Batting Cage Baseballs and Pitching Machine

    Directory of Open Access Journals (Sweden)

    Patrick Drane

    2018-02-01

    Full Text Available Batting cage pitching machines are widely used across the sports of baseball and softball for training and recreation purposes. The balls are specifically designed for the machines and for the environment to ensure high durability and typically do not have seams. Polymeric foam balls are widely used in these automated pitching machines for batting practice in a cage environment and are similar in weight and size compared with the regulation balls used in leagues. The primary objective of this paper is to characterize the polymeric balls and their interaction with the pitching machine. The paper will present measured ball properties and measured relationships between various pitching machine parameters such as wheel speed, and the ratio of wheel speeds on the ball exit velocity and rotation. This paper will also characterize some of the effects of wear on the baseballs and wheels from their prolonged use.

  9. Influence of scale-dependent fracture intensity on block size distribution and rock slope failure mechanisms in a DFN framework

    Science.gov (United States)

    Agliardi, Federico; Galletti, Laura; Riva, Federico; Zanchi, Andrea; Crosta, Giovanni B.

    2017-04-01

    An accurate characterization of the geometry and intensity of discontinuities in a rock mass is key to assess block size distribution and degree of freedom. These are the main controls on the magnitude and mechanisms of rock slope instabilities (structurally-controlled, step-path or mass failures) and rock mass strength and deformability. Nevertheless, the use of over-simplified discontinuity characterization approaches, unable to capture the stochastic nature of discontinuity features, often hampers a correct identification of dominant rock mass behaviour. Discrete Fracture Network (DFN) modelling tools have provided new opportunities to overcome these caveats. Nevertheless, their ability to provide a representative picture of reality strongly depends on the quality and scale of field data collection. Here we used DFN modelling with FracmanTM to investigate the influence of fracture intensity, characterized on different scales and with different techniques, on the geometry and size distribution of generated blocks, in a rock slope stability perspective. We focused on a test site near Lecco (Southern Alps, Italy), where 600 m high cliffs in thickly-bedded limestones folded at the slope scale impend on the Lake Como. We characterized the 3D slope geometry by Structure-from-Motion photogrammetry (range: 150-1500m; point cloud density > 50 pts/m2). Since the nature and attributes of discontinuities are controlled by brittle failure processes associated to large-scale folding, we performed a field characterization of meso-structural features (faults and related kinematics, vein and joint associations) in different fold domains. We characterized the discontinuity populations identified by structural geology on different spatial scales ranging from outcrops (field surveys and photo-mapping) to large slope sectors (point cloud and photo-mapping). For each sampling domain, we characterized discontinuity orientation statistics and performed fracture mapping and circular

  10. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.

    Science.gov (United States)

    Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein

    2017-12-22

    The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
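The two baseline segmentations the abstract compares, FNSW and FOSW, differ only in the window step. A minimal sketch, with window size and overlap chosen purely for illustration:

```python
import numpy as np

def sliding_windows(signal, size, overlap=0.0):
    """Fixed-size windowing of a 1-D stream.
    overlap=0.0 reproduces FNSW; 0 < overlap < 1 reproduces FOSW."""
    step = max(1, int(size * (1.0 - overlap)))
    starts = range(0, len(signal) - size + 1, step)
    return np.array([signal[s:s + size] for s in starts])

acc = np.arange(100.0)                              # stand-in accelerometer stream
fnsw = sliding_windows(acc, size=20)                # 5 disjoint windows
fosw = sliding_windows(acc, size=20, overlap=0.5)   # 9 half-overlapping windows
```

EvenT-ML replaces this fixed stepping with segment boundaries triggered by detected fall-stage events, which is why its windows align with the pre-impact, impact, and post-impact phases.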

  11. Sample preparation of metal alloys by electric discharge machining

    Science.gov (United States)

    Chapman, G. B., II; Gordon, W. A.

    1976-01-01

    Electric discharge machining was investigated as a noncontaminating method of comminuting alloys for subsequent chemical analysis. Particulate dispersions in water were produced from bulk alloys at a rate of about 5 mg/min by using a commercially available machining instrument. The utility of this approach was demonstrated by results obtained when acidified dispersions were substituted for true acid solutions in an established spectrochemical method. The analysis results were not significantly different for the two sample forms. Particle size measurements and preliminary results from other spectrochemical methods which require direct aspiration of liquid into flame or plasma sources are reported.

  12. Design of instrumentation and software for precise laser machining

    Science.gov (United States)

    Wyszyński, D.; Grabowski, Marcin; Lipiec, Piotr

    2017-10-01

    The paper concerns the design of instrumentation and software for precise laser machining. Application of advanced laser-beam manipulation instrumentation enables a noticeable improvement in cut quality and material loss. These factors have a significant impact on process efficiency and cut-edge quality in terms of machined part size and shape accuracy, wall taper, material loss reduction (e.g. diamond) and time effectiveness. The goal can be reached by integration of the laser drive, observation and optical measurement system, beam manipulation system and five-axis mechanical instrumentation, with the use of advanced tailored software enabling full laser cutting process control and monitoring.

  13. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
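The SAD disparity computation the FPGA pipeline performs can be sketched in software: for each reference pixel, slide a window horizontally over the other image and keep the shift with the smallest sum of absolute differences. The window size and disparity range below are illustrative and far smaller than the machine's 64-pixel range:

```python
import numpy as np

def sad_disparity(left, right, window=5, max_disp=8):
    """Dense disparity by SAD block matching, left image as reference.
    disp[y, x] = horizontal shift d minimising the sum of absolute
    differences between the left patch at x and the right patch at x - d."""
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            best_d, best_sad = 0, None
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1,
                             x - d - r:x - d + r + 1].astype(np.int32)
                sad = np.abs(patch - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_d, best_sad = d, sad
            disp[y, x] = best_d
    return disp

# Synthetic pair: the left view is the right view shifted 4 px rightwards,
# so the true disparity of interior pixels is 4.
rng = np.random.default_rng(0)
right = rng.integers(0, 256, size=(40, 60), dtype=np.uint8)
left = np.roll(right, 4, axis=1)
disp = sad_disparity(left, right)
```

The FPGA implementation parallelises exactly these per-pixel, per-disparity SAD sums to reach real-time frame rates.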

  14. Amplification macroscopique de mouvements nanométriques induits par des machines moléculaires

    OpenAIRE

    Goujon , Antoine

    2016-01-01

    The last twenty years have seen tremendous progress in the design and synthesis of complex molecular machines, often inspired by the beauty of the machinery found in biological systems. However, amplification of molecular machine motion over several orders of magnitude above their typical length scale is still an ambitious challenge. This work describes how self-organization of molecular machines or motors allows for the synthesis of materials translating the motions of their component...

  15. Effect of display size on visual attention.

    Science.gov (United States)

    Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao

    2011-06-01

    Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.

  16. Health-promoting vending machines: evaluation of a pediatric hospital intervention.

    Science.gov (United States)

    Van Hulst, Andraea; Barnett, Tracie A; Déry, Véronique; Côté, Geneviève; Colin, Christine

    2013-01-01

    Taking advantage of a natural experiment made possible by the placement of health-promoting vending machines (HPVMs), we evaluated the impact of the intervention on consumers' attitudes toward and practices with vending machines in a pediatric hospital. Vending machines offering healthy snacks, meals, and beverages were developed to replace four vending machines offering the usual high-energy, low-nutrition fare. A pre- and post-intervention evaluation design was used; data were collected through exit surveys and six-week follow-up telephone surveys among potential vending machine users before (n=293) and after (n=226) placement of HPVMs. Chi-2 statistics were used to compare pre- and post-intervention participants' responses. More than 90% of pre- and post-intervention participants were satisfied with their purchase. Post-intervention participants were more likely to state that nutritional content and appropriateness of portion size were elements that influenced their purchase. Overall, post-intervention participants were more likely than pre-intervention participants to perceive as healthy the options offered by the hospital vending machines. Thirty-three percent of post-intervention participants recalled two or more sources of information integrated in the HPVM concept. No differences were found between pre- and post-intervention participants' readiness to adopt healthy diets. While the HPVM project had challenges as well as strengths, vending machines offering healthy snacks are feasible in hospital settings.

  17. Small Scale Yielding Correction of Constraint Loss in Small Sized Fracture Toughness Test Specimens

    International Nuclear Information System (INIS)

    Kim, Maan Won; Kim, Min Chul; Lee, Bong Sang; Hong, Jun Hwa

    2005-01-01

    Fracture toughness data in the ductile-brittle transition region of ferritic steels show scatter produced by local sampling effects and a specimen geometry dependence that results from relaxation of crack tip constraint. ASTM E1921 provides a standard test method to define the median toughness temperature curve, the so-called Master Curve, for the material corresponding to a 1T crack front length, and also defines a reference temperature, T0, at which the median toughness is 100 MPa√m for a 1T size specimen. The ASTM E1921 procedures assume that high-constraint, small-scale yielding (SSY) conditions prevail at fracture along the crack front. Violation of the SSY assumption occurs most often during tests of smaller specimens. Constraint loss in such cases leads to higher toughness values and thus lower T0 values. When applied to a structure with a low-constraint geometry, the standard fracture toughness estimates may be strongly over-conservative. Considerable effort has been devoted to correcting for the constraint effect. In this work, we applied a small-scale yielding correction (SSYC) to adjust for the constraint loss of 1/3PCVN and PCVN specimens, which are smaller than a 1T specimen, in the fracture toughness Master Curve test.

  18. Support vector machine in machine condition monitoring and fault diagnosis

    Science.gov (United States)

    Widodo, Achmad; Yang, Bo-Suk

    2007-08-01

    Recently, machine condition monitoring and fault diagnosis as part of a maintenance system has become a global issue because of the potential gains from reduced maintenance costs, improved productivity, and increased machine availability. This paper presents a survey of machine condition monitoring and fault diagnosis using the support vector machine (SVM). It attempts to summarize and review recent research and developments of SVM in machine condition monitoring and diagnosis. Numerous methods have been developed based on intelligent systems such as artificial neural networks, fuzzy expert systems, case-based reasoning, random forests, etc.; however, the use of SVM for machine condition monitoring and fault diagnosis is still comparatively rare. SVM has excellent generalization performance, so it can produce high classification accuracy for machine condition monitoring and diagnosis. Up to 2006, the use of SVM in machine condition monitoring and fault diagnosis tended to develop toward expertise-oriented and problem-oriented domains. Finally, the ability to continually adapt and to obtain novel ideas for machine condition monitoring and fault diagnosis using SVM remains future work.
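    The survey itself contains no code, but the basic classification setup it reviews can be sketched as follows. This is a hypothetical example with synthetic two-feature data standing in for vibration statistics (not data or code from the paper); scikit-learn's `SVC` is used for the RBF-kernel SVM.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)

# Two hypothetical condition classes: 0 = normal, 1 = faulty bearing.
# The two features stand in for, e.g., RMS amplitude and kurtosis of a
# vibration signal; values are synthetic.
normal = rng.normal(loc=[1.0, 3.0], scale=0.3, size=(200, 2))
faulty = rng.normal(loc=[2.0, 6.0], scale=0.5, size=(200, 2))
X = np.vstack([normal, faulty])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM; standardizing the features first is standard practice.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"test accuracy: {accuracy:.2f}")
```

    In a real monitoring application the features would be extracted from measured signals and the kernel and regularization parameters tuned by cross-validation.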

  19. Finite Size Scaling of Perceptron

    OpenAIRE

    Korutcheva, Elka; Tonchev, N.

    2000-01-01

    We study the first-order transition in the model of a simple perceptron with continuous weights and large but finite values of the inputs. Making the analogy with the usual finite-size physical systems, we calculate the shift and the rounding exponents near the transition point. In the case of a general perceptron with a larger variety of inputs, the analysis gives only bounds for the exponents.

  20. Research on a Novel Parallel Engraving Machine and its Key Technologies

    Directory of Open Access Journals (Sweden)

    Zhang Shi-hui

    2008-11-01

    Full Text Available In order to compensate for the disadvantages of the conventional engraving machine and exploit the advantages of the parallel mechanism, a novel parallel engraving machine is presented and some of its key technologies are studied in this paper. Mechanism performance is first analyzed in terms of the first- and second-order influence coefficient matrices, so that mechanism dimensions that are better for all performance indices of both kinematics and dynamics can be confirmed, breaking through the past restriction of considering only the first-order influence coefficient matrix. This provides the theoretical basis for designing the mechanism dimensions of a novel engraving machine with better performance. In addition, methods for tool path planning and for engraving force control are also studied. The proposed algorithm for tool path planning on a curved surface can, in theory, be applied to an arbitrary spatial curved surface, and the engraving force control, based on a fuzzy neural network (FNN), adapts well to a changing environment. Research on teleoperation of the parallel engraving machine based on a B/S architecture resolves key problems such as the control mode, a sharing mechanism for multiple users, real-time control of the engraving job, and real-time transmission of video information. Simulation results further show the feasibility and validity of the proposed methods.

  1. Research on a Novel Parallel Engraving Machine and its Key Technologies

    Directory of Open Access Journals (Sweden)

    Kong Ling-fu

    2004-12-01

    Full Text Available In order to compensate for the disadvantages of the conventional engraving machine and exploit the advantages of the parallel mechanism, a novel parallel engraving machine is presented and some of its key technologies are studied in this paper. Mechanism performance is first analyzed in terms of the first- and second-order influence coefficient matrices, so that mechanism dimensions that are better for all performance indices of both kinematics and dynamics can be confirmed, breaking through the past restriction of considering only the first-order influence coefficient matrix. This provides the theoretical basis for designing the mechanism dimensions of a novel engraving machine with better performance. In addition, methods for tool path planning and for engraving force control are also studied. The proposed algorithm for tool path planning on a curved surface can, in theory, be applied to an arbitrary spatial curved surface, and the engraving force control, based on a fuzzy neural network (FNN), adapts well to a changing environment. Research on teleoperation of the parallel engraving machine based on a B/S architecture resolves key problems such as the control mode, a sharing mechanism for multiple users, real-time control of the engraving job, and real-time transmission of video information. Simulation results further show the feasibility and validity of the proposed methods.

  2. Scale size and life time of energy conversion regions observed by Cluster in the plasma sheet

    Directory of Open Access Journals (Sweden)

    M. Hamrin

    2009-11-01

    Full Text Available In this article, and in a companion paper by Hamrin et al. (2009) [Occurrence and location of concentrated load and generator regions observed by Cluster in the plasma sheet], we investigate localized energy conversion regions (ECRs) in Earth's plasma sheet. From more than 80 Cluster plasma sheet crossings (660 h of data) at altitudes of about 15–20 RE in the summer and fall of 2001, we have identified 116 Concentrated Load Regions (CLRs) and 35 Concentrated Generator Regions (CGRs). By examining variations in the power density, E·J, where E is the electric field and J is the current density obtained by Cluster, we have estimated typical values of the scale size and lifetime of the CLRs and the CGRs. We find that a majority of the observed ECRs are rather stationary in space but varying in time. Assuming that the ECRs are cylindrically shaped and equal in size, we conclude that their typical scale size is 2 RE ≲ ΔS_ECR ≲ 5 RE. The ECRs hence occupy a significant portion of the mid-altitude plasma sheet. Moreover, the CLRs appear to be somewhat larger than the CGRs. The lifetime of the ECRs is of the order of 1–10 min, consistent with the large-scale magnetotail MHD simulations of Birn and Hesse (2005). The lifetime of the CGRs is somewhat shorter than that of the CLRs. On time scales of 1–10 min, we believe that ECRs rise and vanish in significant regions of the plasma sheet, possibly oscillating between load and generator character. It is probable that at least some of the observed ECRs oscillate energy back and forth in the plasma sheet instead of channeling it to the ionosphere.

  3. Machine learning and computer vision approaches for phenotypic profiling.

    Science.gov (United States)

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  4. Permanent Magnet Flux-Switching Machine, Optimal Design and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Liviu Emilian Somesan

    2013-01-01

    Full Text Available In this paper, an analytical sizing-design procedure for a typical permanent magnet flux-switching machine (PMFSM) with 12 stator poles and 10 rotor poles is presented. An optimal design, based on the Hooke-Jeeves method with maximum torque density as the objective function, is performed. The results were validated via two-dimensional finite element analysis (2D-FEA) applied to the optimized structure. The influence of the permanent magnet (PM) dimensions and type, and of the rotor pole shape, on machine performance was also studied via 2D-FEA.

  5. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data

    Science.gov (United States)

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

    Leaf area index (LAI) is an important biophysical parameter, and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including the back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are applied in this study to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data, and the performance of these algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near-infrared (NIR), and shortwave infrared (SWIR) bands than with red and NIR bands alone, and the results were significantly better than those obtained using single-band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443
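    As an illustration of one of the four algorithms, a GRNN is essentially Nadaraya-Watson kernel regression with a Gaussian kernel, and a minimal sketch fits in a few lines of NumPy. The two "bands" and the target function below are synthetic stand-ins, not MODIS reflectance or LAI data, and the bandwidth is an arbitrary choice.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """Predict y at each query point as a Gaussian-kernel-weighted mean
    of the training targets (the GRNN pattern/summation layers)."""
    # Pairwise squared distances between query and training samples.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)       # summation layer + normalization

rng = np.random.default_rng(1)
# Pretend inputs: two reflectance "bands"; target: a smooth function plus
# noise, standing in for LAI.
X = rng.uniform(0, 1, size=(300, 2))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] + rng.normal(0, 0.05, 300)

X_new = rng.uniform(0, 1, size=(100, 2))
y_true = np.sin(2 * np.pi * X_new[:, 0]) + X_new[:, 1]
rmse = np.sqrt(np.mean((grnn_predict(X, y, X_new) - y_true) ** 2))
print(f"RMSE on held-out points: {rmse:.3f}")
```

    Unlike BPNN, nothing is iteratively trained here, which is consistent with the abstract's observation that GRNN-style methods are insensitive to training sample size and cheap to set up.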

  6. A Machine Learning Approach to Estimate Riverbank Geotechnical Parameters from Sediment Particle Size Data

    Science.gov (United States)

    Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon

    2015-04-01

    Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is limited to those sites where extensive field data have been collected, and their ability to provide predictions of bank erosion at the reach scale is limited without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Also, numerical models can be too rigid with respect to detecting unexpected features like the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternative modelling approach capable of using available data. The Self-Organizing Maps (SOM) approach is well suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility, and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and a Bootstrap approach. The basis of Bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the
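    To make the competitive-cooperative learning concrete, a bare-bones SOM can be sketched as below. The data are synthetic, and the grid size, decay schedules, and feature count are illustrative assumptions, not the authors' configuration (their four features would be particle-size fractions).

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 4))                # 500 samples, 4 features in [0, 1]

rows, cols = 8, 8
weights = rng.random((rows, cols, 4))      # one prototype vector per map node
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

def train(weights, data, epochs=20, lr0=0.5, sigma0=3.0):
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - t / n_steps)                   # decaying learning rate
            sigma = max(sigma0 * (1 - t / n_steps), 0.5)   # shrinking neighborhood
            # Competitive step: find the best-matching unit (BMU).
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
            # Cooperative step: pull the BMU and its grid neighbors toward x.
            d2 = ((grid - np.array(bmu)) ** 2).sum(-1)     # grid distance to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))             # neighborhood function
            weights += lr * h[..., None] * (x - weights)
            t += 1
    return weights

weights = train(weights, data)
# Quantization error: mean distance from each sample to its BMU prototype.
qe = np.mean([np.min(np.linalg.norm(weights.reshape(-1, 4) - x, axis=1)) for x in data])
print(f"quantization error: {qe:.3f}")
```

    In the leave-one-out scheme described above, one sample's target component would be masked, the BMU found from the remaining components, and the BMU prototype's masked component used as the estimate.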

  7. Use of models and mockups in verifying man-machine interfaces

    International Nuclear Information System (INIS)

    Seminara, J.L.

    1985-01-01

    The objective of Human Factors Engineering is to tailor the design of facilities and equipment systems to match the capabilities and limitations of the personnel who will operate and maintain the system. This optimization of the man-machine interface is undertaken to enhance the prospects for safe, reliable, timely, and error-free human performance in meeting system objectives. To ensure the eventual success of a complex man-machine system, it is important to systematically and progressively test and verify the adequacy of man-machine interfaces from initial design concepts to system operation. Human factors specialists employ a variety of methods to evaluate the quality of the human-system interface. These methods include: (1) reviews of two-dimensional drawings using appropriately scaled transparent overlays of personnel spanning the anthropometric range, considering clothing and protective gear encumbrances; (2) use of articulated, scaled, plastic templates or manikins overlaid on equipment or facility drawings; (3) development of computerized manikins in computer-aided design approaches; (4) use of three-dimensional scale models to better conceptualize work stations, control rooms, or maintenance facilities; (5) full- or half-scale mockups of system components to evaluate operator/maintainer interfaces; (6) part- or full-task dynamic simulation of operator or maintainer tasks and interactive system responses; (7) laboratory and field research to establish human performance capabilities with alternative system design concepts or configurations. Of the design verification methods listed above, this paper considers only the use of models and mockups in the design process.

  8. Bio-inspired wooden actuators for large scale applications.

    Science.gov (United States)

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.

  9. Multi-objective component sizing of a power-split plug-in hybrid electric vehicle powertrain using Pareto-based natural optimization machines

    Science.gov (United States)

    Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.

    2016-03-01

    The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitist non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
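    The Pareto idea common to both algorithms can be illustrated with a small sketch: extracting the non-dominated set for two minimization objectives (e.g., fuel consumption vs. total energy cost). The candidate "designs" here are random points, not outputs of a PHEV model; this is a naive O(n²) check, not either published algorithm.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points when minimizing all objectives."""
    n = len(points)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(points[j] <= points[i]) and np.any(points[j] < points[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(7)
objs = rng.random((50, 2))          # 50 candidate designs, 2 objectives to minimize
front = pareto_front(objs)
print(f"{len(front)} of {len(objs)} designs are Pareto-optimal")

# Sanity check: no front member may dominate another front member.
for a in front:
    for b in front:
        if a != b:
            assert not (np.all(objs[a] <= objs[b]) and np.any(objs[a] < objs[b]))
```

    Sizing optimizers like NSGA-II evolve the candidate set toward this front rather than enumerating it once, but the dominance test is the same.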

  10. Simulations of Quantum Turing Machines by Quantum Multi-Stack Machines

    OpenAIRE

    Qiu, Daowen

    2005-01-01

    As is well known, in classical computation, Turing machines, circuits, multi-stack machines, and multi-counter machines are equivalent: they can simulate each other in polynomial time. In quantum computation, Yao [11] first proved that for any quantum Turing machine $M$, there exists a quantum Boolean circuit $(n,t)$-simulating $M$, where $n$ denotes the length of the input strings and $t$ is the number of move steps before the machine stops. However, the simulations of quantum Turing ma...

  11. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.

  12. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model with good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation that controls both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.

  13. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

    Directory of Open Access Journals (Sweden)

    Fang Su

    2013-01-01

    Full Text Available Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. As gross domestic product (GDP) is an important indicator of economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm solves two smaller quadratic programming problems instead of one large problem, as in the traditional support vector machine algorithm. Economic development data for Anhui province from 1992 to 2009 are used to study the prediction performance of the algorithm. The mean prediction errors of the wavelet kernel-based primal twin support vector machine and the traditional support vector machine, each trained on samples with 3- to 5-dimensional input vectors, are compared in this paper. The testing results show that the economic development prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.

  14. Re-establishing filtering capabilities of machined porous beryllium via chemical reduction and cleaning

    International Nuclear Information System (INIS)

    Randall, W.L.

    1975-01-01

    Porous beryllium is furnished in sheets of varying sizes and thicknesses; it is therefore necessary that it be machined to specified sizes. A chemical reduction and cleaning procedure was devised to remove the disrupted surface, open the sealed pores of the material, and clean entrapped contaminants from the internal structure. Dimensional stability can be closely controlled, and material size is of no consequence. (U.S.)

  15. Point card compatible automatic vending machine for canned drink; Point card taio kan jido hanbaiki

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-01-10

    A point card compatible automatic vending machine for canned drinks has been developed, providing drink manufacturers with a powerful tool to acquire selling sites and attract consumers. Since the machine is equipped with a device to handle point cards, regular customers have increased and sales have picked up. A point card issuing device is also installed, and the new machine issues a point card whenever a customer wants one. Drink manufacturers rate the vending machine highly because it will contribute to the diffusion of the point card system and because a sales promotion campaign may be conducted through the vending machine, for instance by exchanging a fully marked card for a giveaway on the spot. In the future, a bill validator (paper money identifier) will be integrated even into small-size machines to promote the diffusion of point card compatible machines. (translated by NEDO)

  16. How the machine ‘thinks’: Understanding opacity in machine learning algorithms

    Directory of Open Access Journals (Sweden)

    Jenna Burrell

    2016-01-01

    Full Text Available This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms, to do this work. In this article, I draw a distinction between three forms of opacity: (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully. The analysis in this article gets inside the algorithms themselves. I cite existing literatures in computer science, known industry practices (as they are publicly presented), and do some testing and manipulation of code as a form of lightweight code audit. I argue that recognizing the distinct forms of opacity that may be coming into play in a given application is a key to determining which of a variety of technical and non-technical solutions could help to prevent harm.

  17. A macroevolutionary explanation for energy equivalence in the scaling of body size and population density.

    Science.gov (United States)

    Damuth, John

    2007-05-01

    Across a wide array of animal species, mean population densities decline with species body mass such that the rate of energy use of local populations is approximately independent of body size. This "energetic equivalence" is particularly evident when ecological population densities are plotted across several or more orders of magnitude in body mass and is supported by a considerable body of evidence. Nevertheless, interpretation of the data has remained controversial, largely because of the difficulty of explaining the origin and maintenance of such a size-abundance relationship in terms of purely ecological processes. Here I describe results of a simulation model suggesting that an extremely simple mechanism operating over evolutionary time can explain the major features of the empirical data. The model specifies only the size scaling of metabolism and a process where randomly chosen species evolve to take resource energy from other species. This process of energy exchange among particular species is distinct from a random walk of species abundances and creates a situation in which species populations using relatively low amounts of energy at any body size have an elevated extinction risk. Selective extinction of such species rapidly drives size-abundance allometry in faunas toward approximate energetic equivalence and maintains it there.
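    The mechanism described can be caricatured in a few lines. This is an illustrative re-implementation under assumed parameters, not the author's code: species carry a body mass and a population energy use, randomly chosen species take energy from others, and species whose energy falls below a floor go extinct and are replaced. Because energy then ends up statistically independent of mass, the implied densities scale roughly as M^(-3/4).

```python
import numpy as np

rng = np.random.default_rng(0)
S = 200
mass = 10 ** rng.uniform(0, 6, S)   # body masses spanning six orders of magnitude
energy = np.full(S, 100.0)          # per-species population energy use (arbitrary units)

def step(mass, energy, frac=0.1, floor=1.0):
    """One exchange event: a random species takes energy from another;
    species driven below the energy floor go extinct and are replaced."""
    i, j = rng.choice(len(energy), 2, replace=False)
    taken = frac * energy[j]
    energy[i] += taken
    energy[j] -= taken
    dead = energy < floor
    energy[dead] = 100.0
    mass[dead] = 10 ** rng.uniform(0, 6, int(dead.sum()))
    return mass, energy

for _ in range(20000):
    mass, energy = step(mass, energy)

# Abundance implied by metabolic scaling: E = n * M^(3/4)  =>  n = E / M^(3/4).
density = energy / mass ** 0.75
slope = np.polyfit(np.log10(mass), np.log10(density), 1)[0]
print(f"fitted size-abundance slope: {slope:.2f}")  # near -3/4 under energetic equivalence
```

    The transfer fraction, floor, and replacement rule are arbitrary; the point is only that exchange and extinction blind to body mass leave population energy use size-independent, which is the energetic-equivalence signature.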

  18. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  19. Small-Sized Whole-Tree Harvesting in Finland

    Energy Technology Data Exchange (ETDEWEB)

    Kaerhae, Kalle [Metsaeteho Oy, Helsinki (Finland)

    2006-07-15

    In Finland, there are two mechanized harvesting systems used for small-diameter (d1.3 = 10 cm) thinning wood: 1) the traditional two-machine (harvester and forwarder) system, and 2) the harwarder system (i.e. the same machine performs both felling and haulage to the roadside). At present, there are more than 20 energy wood harwarders in use in Finland. However, no comprehensive studies have been carried out on energy wood harwarders. This paper looks into the productivity results obtained with energy wood harwarders. In addition, the energy wood harvesting costs for harwarders are compared with those of the two-machine system. The results clearly indicated what kind of machine resources can be profitably allocated to different whole-tree harvesting sites. Energy wood harwarders should be directed towards harvesting sites where the forwarding distances are short, the trees harvested are relatively small, and the total volume of energy wood removed is quite low. Conversely, when the stem size removed is relatively large in young stands and the forest haulage distances are long, the traditional two-machine system is more competitive.

  20. Roll and roll-to-roll process scaling through development of a compact flexo unit for printing of back electrodes

    DEFF Research Database (Denmark)

    Dam, Henrik Friis; Andersen, Thomas Rieks; Madsen, Morten Vesterager

    2015-01-01

    some of the most critical steps in the scaling process. We describe the development of such a machine that comprise web guiding, tension control and surface treatment in a compact desk size that is easily moved around and also detail the development of a small cassette based flexographic unit for back...... electrode printing that is parsimonious in terms of ink usage and more gentle than laboratory scale flexo units where the foil transport is either driven by the flexo unit or the flexo unit is driven by the foil transport. We demonstrate fully operational flexible polymer solar cell manufacture using...

  1. Characteristics of the Arcing Plasma Formation Effect in Spark-Assisted Chemical Engraving of Glass, Based on Machine Vision.

    Science.gov (United States)

    Ho, Chao-Ching; Wu, Dung-Sheng

    2018-03-22

    Spark-assisted chemical engraving (SACE) is a non-traditional machining technology that is used to machine electrically non-conducting materials including glass, ceramics, and quartz. Processing accuracy, machining efficiency, and reproducibility are the key factors in the SACE process. In the present study, a machine vision method is applied to monitor and estimate the status of a SACE-drilled hole in quartz glass. During the machining of quartz glass, the spring-fed tool electrode was pre-pressured on the quartz glass surface to keep the electrode in contact with the machined surface. In situ image acquisition and analysis of the SACE drilling process were used to analyze the captured images of the state of the spark discharge at the tip and sidewall of the electrode. The results indicated an association between the accumulated size of the SACE-induced spark area and the depth of the hole: the estimated depths of the SACE-machined holes were a proportional function of the accumulated spark size with a high degree of correlation. The study proposes an innovative computer vision-based method to estimate the depth and status of SACE-drilled holes in real time.
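    The estimation idea (accumulated bright spark-pixel area as a proxy for hole depth) can be sketched with synthetic frames. The gray-level threshold, frame size, and linear depth law below are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
THRESH = 200                        # assumed gray-level threshold for spark pixels

def spark_area(frame, thresh=THRESH):
    """Count pixels brighter than the threshold in one grayscale frame."""
    return int((frame > thresh).sum())

# Simulate 100 frames whose bright region grows as machining proceeds, and a
# hole depth that grows in proportion to the accumulated spark area.
cum_area, areas, depth = 0, [], []
for k in range(100):
    frame = rng.integers(0, 180, size=(64, 64))     # dim background, below threshold
    n_spark = 20 + 5 * k                            # growing spark region
    idx = rng.choice(64 * 64, n_spark, replace=False)
    frame.flat[idx] = 255                           # bright spark pixels
    cum_area += spark_area(frame)
    areas.append(cum_area)
    depth.append(0.002 * cum_area + rng.normal(0, 0.5))  # assumed depth law + noise

# Linear regression of depth on accumulated spark area, as in the paper's fit.
slope, intercept = np.polyfit(areas, depth, 1)
r = np.corrcoef(areas, depth)[0, 1]
print(f"fitted slope: {slope:.4f}, correlation: {r:.3f}")
```

    With real images the thresholding step would typically follow background subtraction on frames captured near the electrode tip, but the area-accumulation and regression steps are the same.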

  2. Characteristics of the Arcing Plasma Formation Effect in Spark-Assisted Chemical Engraving of Glass, Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Chao-Ching Ho

    2018-03-01

    Full Text Available Spark-assisted chemical engraving (SACE) is a non-traditional machining technology that is used to machine electrically non-conducting materials including glass, ceramics, and quartz. The processing accuracy, machining efficiency, and reproducibility are the key factors in the SACE process. In the present study, a machine vision method is applied to monitor and estimate the status of a SACE-drilled hole in quartz glass. During the machining of quartz glass, the spring-fed tool electrode was pre-pressured on the quartz glass surface to feed the electrode that was in contact with the machining surface of the quartz glass. In situ image acquisition and analysis of the SACE drilling processes were used to analyze the captured image of the state of the spark discharge at the tip and sidewall of the electrode. The results indicated an association between the cumulative size of the SACE-induced spark area and the depth of the hole. The results indicated that the evaluated depths of the SACE-machined holes were a proportional function of the cumulative spark size with a high degree of correlation. The study proposes an innovative computer vision-based method to estimate the depth and status of SACE-drilled holes in real time.

  3. Electricity of machine tool

    International Nuclear Information System (INIS)

    Gijeon media editorial department

    1977-10-01

    This book is divided into three parts. The first part deals with electric machines, ranging from generators to motors: the motor as a power source of the machine tool, and electrical devices for machine tools such as switches in the main circuit, automatic machines, knife switches and push buttons, snap switches, protection devices, timers, solenoids, and rectifiers. The second part covers wiring diagrams, including the basic electric circuits of machine tools and the wiring diagrams of machines such as milling machines, planers and grinding machines. The third part introduces fault diagnosis of machines, giving practical solutions according to the diagnosis and the diagnostic method of voltage and resistance measurement with a tester.

  4. Effect of the Cutting Tool Geometry on the Tool Wear Resistance When Machining Inconel 625

    Directory of Open Access Journals (Sweden)

    Tomáš Zlámal

    2017-12-01

    Full Text Available The paper deals with the design of a suitable cutting geometry of a tool for the machining of the Inconel 625 nickel alloy. This alloy is among the hard-to-machine refractory alloys that cause very rapid wear on cutting tools. Therefore, SNMG and RCMT indexable cutting inserts were used to machine the alloy. The selected insert geometry should prevent notch wear and extend tool life. The alloy was machined under predetermined cutting conditions. The angle of the main edge, and thus the size and nature of the wear, changed with the depth of the material layer being cut. The criteria for determining a more suitable cutting geometry were the tool’s durability and the roughness of the machined surface.

  5. Effect of the Cutting Tool Geometry on the Tool Wear Resistance when Machining Inconel 625

    Directory of Open Access Journals (Sweden)

    Tomáš Zlámal

    2018-03-01

    Full Text Available The paper deals with the design of a suitable cutting geometry of a tool for the machining of the Inconel 625 nickel alloy. This alloy is among the hard-to-machine refractory alloys that cause very rapid wear on cutting tools. Therefore, SNMG and RCMT indexable cutting inserts were used to machine the alloy. The selected insert geometry should prevent notch wear and extend tool life. The alloy was machined under predetermined cutting conditions. The angle of the main edge, and thus the size and nature of the wear, changed with the depth of the material layer being cut. The criteria for determining a more suitable cutting geometry were the tool’s durability and the roughness of the machined surface.

  6. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  7. A heuristic for the inventory management of smart vending machine systems

    Directory of Open Access Journals (Sweden)

    Yang-Byung Park

    2012-12-01

    Full Text Available Purpose: The purpose of this paper is to propose a heuristic for the inventory management of smart vending machine systems with product substitution under the (replenishment point, order-up-to level) policy and to evaluate its performance. Design/methodology/approach: The heuristic is developed on the basis of the decoupled approach. An integer linear mathematical model is built to determine the number of product storage compartments and the replenishment threshold for each smart vending machine in the system, and the Clarke and Wright savings algorithm is applied to route vehicles for inventory replenishment of smart vending machines that share the same delivery days. Computational experiments are conducted on several small-size test problems to compare the proposed heuristic with the integrated optimization mathematical model with respect to system profit. Furthermore, a sensitivity analysis is carried out on a medium-size test problem to evaluate the effect of the customer service level on system profit using a computer simulation. Findings: The results show that the proposed heuristic yielded fairly good solutions, with a 5.7% error rate on average compared to the optimal solutions. The proposed heuristic took about 3 CPU minutes on average on the test problems consisting of 10 five-product smart vending machines. It was confirmed that the system profit is significantly affected by the customer service level. Originality/value: The inventory management of smart vending machine systems is newly treated. Product substitutions are explicitly considered in the model. The proposed heuristic is effective as well as efficient. It can be easily modified for application to various retail vending settings under a vendor-managed inventory scheme with a POS system.
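
The Clarke and Wright savings step used in this record's heuristic can be sketched as follows; the coordinates, demands and vehicle capacity below are invented, and the implementation is a minimal textbook version of the parallel savings heuristic, not the authors' code.

```python
# Minimal Clarke-Wright savings heuristic: start with one route per customer,
# then repeatedly merge the pair of routes with the largest positive saving
# s(i,j) = d(0,i) + d(0,j) - d(i,j), subject to vehicle capacity.
import math

def clarke_wright(dist, demand, capacity):
    """dist: symmetric matrix, node 0 is the depot.  Returns customer routes."""
    n = len(dist)
    routes, route_of, load = [], {}, {}
    for i in range(1, n):
        r = [i]
        routes.append(r)
        route_of[i] = r
        load[id(r)] = demand[i]
    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i in range(1, n) for j in range(i + 1, n)),
                     reverse=True)
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri is rj or s <= 0:
            continue
        if load[id(ri)] + load[id(rj)] > capacity:
            continue
        if ri[-1] == i and rj[0] == j:      # join tail of ri to head of rj
            merged = ri + rj
        elif rj[-1] == j and ri[0] == i:
            merged = rj + ri
        else:
            continue                        # i or j is interior: no merge
        new_load = load.pop(id(ri)) + load.pop(id(rj))
        routes.remove(ri)
        routes.remove(rj)
        routes.append(merged)
        load[id(merged)] = new_load
        for node in merged:
            route_of[node] = merged
    return routes

# Invented layout: depot at the origin plus four vending machines.
pts = [(0, 0), (0, 10), (0, 12), (10, 0), (12, 0)]
dist = [[math.hypot(px - qx, py - qy) for qx, qy in pts] for px, py in pts]
routes = clarke_wright(dist, demand=[0, 1, 1, 1, 1], capacity=2)
```

With capacity 2 the two nearby pairs end up on two separate routes, as the savings ranking suggests.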

  8. Humanizing machines: Anthropomorphization of slot machines increases gambling.

    Science.gov (United States)

    Riva, Paolo; Sacchi, Simona; Brambilla, Marco

    2015-12-01

    Do people gamble more on slot machines if they think that they are playing against humanlike minds rather than mathematical algorithms? Research has shown that people have a strong cognitive tendency to imbue humanlike mental states to nonhuman entities (i.e., anthropomorphism). The present research tested whether anthropomorphizing slot machines would increase gambling. Four studies manipulated slot machine anthropomorphization and found that exposing people to an anthropomorphized description of a slot machine increased gambling behavior and reduced gambling outcomes. Such findings emerged using tasks that focused on gambling behavior (Studies 1 to 3) as well as in experimental paradigms that included gambling outcomes (Studies 2 to 4). We found that gambling outcomes decrease because participants primed with the anthropomorphic slot machine gambled more (Study 4). Furthermore, we found that high-arousal positive emotions (e.g., feeling excited) played a role in the effect of anthropomorphism on gambling behavior (Studies 3 and 4). Our research indicates that the psychological process of gambling-machine anthropomorphism can be advantageous for the gaming industry; however, this may come at great expense for gamblers' (and their families') economic resources and psychological well-being. (c) 2015 APA, all rights reserved.

  9. Strategies and Principles of Distributed Machine Learning on Big Data

    Directory of Open Access Journals (Sweden)

    Eric P. Xing

    2016-06-01

    Full Text Available The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions thereupon). In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required—and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that “big” ML systems can benefit greatly from ML-rooted statistical and algorithmic insights—and that ML researchers should therefore not shy away from such systems design—we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area
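
One bridging of computation and communication discussed above can be illustrated with a toy, single-process simulation: each "machine" runs local gradient steps on its own data shard, and the machines periodically communicate by averaging their parameters (an all-reduce style exchange). The data, schedule and learning rate are all invented.

```python
# Toy data-parallel training: local SGD per shard + periodic parameter
# averaging.  The loss is sum_x (theta - x)^2, whose global minimizer is
# the mean of all data across shards.

def local_sgd(theta, shard, steps=5, lr=0.1):
    """Refine the local copy of theta by gradient steps on one shard."""
    for _ in range(steps):
        grad = sum(2.0 * (theta - x) for x in shard) / len(shard)
        theta -= lr * grad
    return theta

def train(shards, rounds=50):
    theta = 0.0                                   # shared model parameter
    for _ in range(rounds):
        # computation: every "machine" works on its own shard
        locals_ = [local_sgd(theta, shard) for shard in shards]
        # communication: all-reduce style averaging of the local copies
        theta = sum(locals_) / len(locals_)
    return theta

shards = [[1.0, 2.0], [3.0, 4.0], [9.0, 11.0]]    # equal-sized data shards
theta = train(shards)
```

Because the shards are equal-sized, the averaging fixed point is the overall data mean (5.0 here); with unequal shards a weighted average would be needed.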

  10. EAST machine assembly and its measurement system

    International Nuclear Information System (INIS)

    Wu, S.T.

    2005-01-01

    The EAST (HT-7U) superconducting tokamak consists of a superconducting poloidal field magnet system, a toroidal field magnet system, a vacuum vessel and in-vessel components, thermal shields and a cryostat vessel. The main parts of the machine have been delivered to ASIPP (Institute of Plasma Physics, Chinese Academy of Sciences) successively since 2003. Because of its complicated constitution and precise requirements, a reasonable assembly procedure and measurement technique have to be defined carefully. Before the assembly procedure, a reference frame was set up with fiducial targets on the wall of the test hall by an industrial measurement system. After the torus of TF coils is formed, a new reference frame will be set up from the position of the TF torus. The vacuum vessel with all inner parts will be installed with reference to the new reference frame. The big size and mass of the components and the special configuration of the superconducting machine, together with the tight installation tolerances of the HT-7U (EAST) machine, result in a complicated assembly procedure. The procedure began with the installation of the support frame and the base of the cryostat vessel last year. In this paper, the assembly precision requirements for some key components of the machine are described. The reference frame for the assembly and maintenance is explained. The assembly procedure is introduced

  11. The Large Scale Machine Learning in an Artificial Society: Prediction of the Ebola Outbreak in Beijing

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2015-01-01

    Full Text Available Ebola virus disease (EVD) is distinguished by its high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict possible epidemic situations in practice. Luckily, in recent years, computational experiments based on artificial societies have appeared, providing a new approach to study the propagation of EVD and analyze the corresponding interventions. Therefore, the rationality of the artificial society is the key to the accuracy and reliability of experiment results. Individuals’ behaviors along with travel mode directly affect the propagation among individuals. Firstly, artificial Beijing is reconstructed based on geodemographics, and machine learning is involved to optimize individuals’ behaviors. Meanwhile, an Ebola course model and a propagation model are built according to the parameters in West Africa. Subsequently, the propagation mechanism of EVD is analyzed, the epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of the Chinese government, the conclusion is drawn that Ebola cannot break out on a large scale in the city of Beijing.

  12. Finite-State Complexity and the Size of Transducers

    Directory of Open Access Journals (Sweden)

    Cristian Calude

    2010-08-01

    Full Text Available Finite-state complexity is a variant of algorithmic information theory obtained by replacing Turing machines with finite transducers. We consider the state-size of transducers needed for minimal descriptions of arbitrary strings and, as our main result, we show that the state-size hierarchy with respect to a standard encoding is infinite. We consider also hierarchies yielded by more general computable encodings.
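
The transducer-based descriptions behind finite-state complexity can be sketched concretely: a description of a string is a (transducer, input) pair, and the record above studies how many states the transducer needs. The one-state "bit doubler" below is a standard toy example, not taken from the paper.

```python
# A deterministic finite transducer given as a table
# delta: (state, input symbol) -> (next state, output string).

def run_transducer(delta, start, inp):
    """Run the transducer on inp and return the concatenated output."""
    state, out = start, []
    for sym in inp:
        state, piece = delta[(state, sym)]
        out.append(piece)
    return "".join(out)

# One-state transducer that doubles every bit.  Any 2n-bit string of the
# form b1 b1 b2 b2 ... is thus described by only n bits of input plus a
# constant-size machine - the kind of compression finite-state complexity
# measures.
double = {("q0", "0"): ("q0", "00"),
          ("q0", "1"): ("q0", "11")}
```

For example, the 6-bit string "001111" is described by the 3-bit input "011" fed to `double`.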

  13. MACHINE-TRANSFORMER UNITS FOR WIND TURBINES

    Directory of Open Access Journals (Sweden)

    V.I. Panchenko

    2016-03-01

    Full Text Available Background. Electric generators of wind turbines must meet the following requirements: they must be multi-pole; have minimum size and weight; be non-contact, but controllable; and ensure the maximum possible output voltage when working on the power supply system. Multi-pole and contactless designs are relatively simply realized in the synchronous generator with permanent-magnet excitation and in the synchronous inductor generator with electromagnetic excitation; however, the first has the disadvantage that there is no possibility to control the output voltage, and the second has a low magnetic leakage coefficient, with the appropriate consequences. Purpose. To compare the dimensions and weight of machine-transformer units with those of induction generators and to justify their application for systems with low rotation speeds. Methodology. A new design of the electric inductor machine, called in the technical literature a machine-transformer unit (MTU), is presented. A ratio for determining the estimated capacity of such units is obtained. Results. In a specific example it is shown that the estimated power of an MTU may exceed that of traditional synchronous machines of the same dimensions. The MTU design allows placement of the stator coil at some distance from the rotating parts of the machine, namely, in a closed container filled with insulating liquid. This increases capacity by means of more efficient cooling of the coil, and also increases the output voltage of the MTU as a generator to a level of 35 kV or more. Recommendations on the selection of certain parameters of the MTU stator winding are presented. Formulas for calculating the copper cost of the MTU field winding and of a synchronous salient-pole generator are developed. In a specific example it is shown that such costs in the synchronous generator exceed the similar ones in the MTU by 2.5 times.

  14. Impact of oscillations of shafts on machining accuracy using non-stationary machines

    Science.gov (United States)

    Fedorenko, M. A.; Bondarenko, J. A.; Pogonin, A. A.

    2018-03-01

    The restoration of parts and units of equipment of large mass and size is possible on the basis of the development of a research base, including the development of models and theoretical relations revealing the complex causes of damage and equipment failure. This allows one to develop new, effective technologies for maintenance and repair whose implementation ensures the efficiency and durability of the machines. The development of new forms of technical maintenance and repair of equipment, based on a systematic evaluation of its technical condition with the help of modern diagnostic tools, can significantly reduce the duration of downtime.

  15. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    Science.gov (United States)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) with the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while solving high-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the result proved to be a satisfactory one.
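
The low computational cost of the LS-SVM layer comes from the fact that training reduces to one linear system (Suykens' classifier formulation) instead of a quadratic program. A minimal sketch of that building block follows; the KPCA front layer of the two-layer method is omitted, and the toy data, linear kernel and gamma value are invented.

```python
# LS-SVM classifier: solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]
# with Omega_ij = y_i y_j K(x_i, x_j), then predict with
# sign(sum_i alpha_i y_i K(x_i, x) + b).

def solve(A, rhs):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lssvm_fit(X, y, gamma=10.0):
    n = len(X)
    A = [[0.0] + [float(yj) for yj in y]]
    for i in range(n):
        A.append([float(y[i])] +
                 [y[i] * y[j] * dot(X[i], X[j]) + (1 / gamma if i == j else 0.0)
                  for j in range(n)])
    sol = solve(A, [0.0] + [1.0] * n)
    return sol[0], sol[1:]                  # bias b, multipliers alpha

def lssvm_predict(X, y, b, alpha, x):
    s = b + sum(alpha[i] * y[i] * dot(X[i], x) for i in range(len(X)))
    return 1 if s >= 0 else -1

# Invented toy "credit" data: two well-separated clusters, linear kernel.
X = [[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [4.0, 3.0]]
y = [-1, -1, 1, 1]
b, alpha = lssvm_fit(X, y)
```

Note that, unlike the standard SVM, the alphas here carry no sign constraints; sparseness is exactly what the paper's two-layer construction tries to recover.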

  16. MoleculeNet: a benchmark for molecular machine learning.

    Science.gov (United States)

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S; Leswing, Karl; Pande, Vijay

    2018-01-14

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large-scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of a particular learning algorithm.

  17. Size effect studies on geometrically scaled three point bend type specimens with U-notches

    Energy Technology Data Exchange (ETDEWEB)

    Krompholz, K.; Kalkhof, D.; Groth, E.

    2001-02-01

    One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess size and scale effects in plastic flow and failure. This includes an experimental programme devoted to characterising the influence of specimen size, strain rate, and strain gradients at various temperatures. One of the materials selected was the forged reactor pressure vessel material 20 MnMoNi 55, material number 1.6310 (heat number 69906). Among others, a size effect study of the creep response of this material was performed, using geometrically similar smooth specimens with 5 mm and 20 mm diameter. The tests were done under constant load in an inert atmosphere at 700 °C, 800 °C, and 900 °C, close to and within the phase transformation regime. The mechanical stresses varied from 10 MPa to 30 MPa, depending on temperature. Prior to creep testing, the temperature and time dependence of scale oxidation as well as the temperature regime of the phase transformation were determined. The creep tests were supplemented by metallographical investigations. The test results are presented in the form of creep curves (strain versus time), from which characteristic creep data were determined as a function of the stress level at given temperatures. The characteristic data are the times to 5% and 15% strain and to rupture, the secondary (minimum) creep rate, the elongation at fracture within the gauge length, the type of fracture, and the area reduction after fracture. From the metallographical investigations the austenite phase contents at different temperatures could be estimated. From these data the parameters of the regression calculation (e.g. Norton's creep law) were also obtained. The evaluation revealed that the creep curves and characteristic data are size dependent to varying degree, depending on the stress and temperature level, but the size influence cannot be related to corrosion or orientation effects or to macroscopic heterogeneity (position effect

  18. Code-expanded radio access protocol for machine-to-machine communications

    DEFF Research Database (Denmark)

    Thomsen, Henning; Kiilerich Pratas, Nuno; Stefanovic, Cedomir

    2013-01-01

    The random access methods used for the support of machine-to-machine communications, also referred to as Machine-Type Communications, in current cellular standards are derivatives of traditional framed slotted ALOHA and therefore do not support high user loads efficiently. We propose an approach that is motivated by the random access method employed in LTE, which significantly increases the amount of contention resources without increasing the system resources, such as contention subframes and preambles. This is accomplished by a logical, rather than physical, extension of the access method in which, by combining the available system subframes and orthogonal preambles, the amount of available contention resources is drastically increased, enabling the massive support of Machine-Type Communication users that is beyond the reach of current systems.

  19. The Basics of Stellites in Machining Perspective

    Directory of Open Access Journals (Sweden)

    Md Shahanur Hasan

    2016-12-01

    Stellite 6 using coated carbide inserts is presented in this paper. Interesting facts on the residual stresses induced by machining processes in Stellite 6 are revealed and analysed. The microhardness variation of machined surfaces of Stellite 6 using different tool geometries is investigated in this research review. It is revealed that coated carbide inserts with a medium-size nose radius perform better with respect to hardness changes and heat generation, producing minimum phase changes on machined surfaces of Stellite 6.

  20. Finite size scaling analysis of disordered electron systems

    International Nuclear Information System (INIS)

    Markos, P.

    2012-01-01

    We demonstrated the application of the finite-size scaling method to the analysis of the transition of a disordered system from the metallic to the insulating regime. The method enables us to calculate the critical point and the critical exponent which determines the divergence of the correlation length in the vicinity of the critical point. The universality of the metal-insulator transition was verified by numerical analysis of various physical parameters, and the critical exponent was calculated with high accuracy for different disordered models. The numerically obtained value of the critical exponent for the three-dimensional disordered model (1) has recently been supported by semi-analytical work and verified by experimental optical measurements equivalent to the three-dimensional disordered model (1). Another unsolved problem of localization is the disagreement between numerical results and the predictions of analytical theories. At present, no analytical theory confirms the numerically obtained values of the critical exponents. The reason for this disagreement lies in the statistical character of the process of localization. The theory must consider all possible scattering processes on randomly distributed impurities. All physical variables are statistical quantities with broad probability distributions, and it is in general not known how to calculate their mean values analytically. We believe that detailed numerical analysis of various disordered systems brings inspiration for the formulation of an analytical theory. (authors)
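
The crossing-point idea behind finite-size scaling can be sketched on synthetic data: if a dimensionless quantity obeys g(W, L) = f((W - W_c) L^(1/nu)), then curves for different system sizes L intersect at the critical point W_c, which a simple root search recovers. The planted values W_c = 16.5 and nu = 1.57 are arbitrary here; numbers of that order are often quoted for the 3D Anderson model.

```python
# Synthetic finite-size scaling data and numerical recovery of the crossing.
import math

W_C, NU = 16.5, 1.57                # "true" values planted in the fake data

def g(W, L):
    """Synthetic dimensionless quantity obeying the scaling ansatz."""
    return math.tanh((W - W_C) * L ** (1 / NU))

def crossing(L1, L2, lo, hi, tol=1e-12):
    """Bisection on h(W) = g(W, L1) - g(W, L2); curves cross where h = 0."""
    def h(W):
        return g(W, L1) - g(W, L2)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

Wc_est = crossing(10, 20, 15.0, 18.0)
```

In a real analysis g comes from numerics with statistical noise, so the crossing is extracted by fitting the ansatz rather than by bisection, but the size-independence at W_c is the same.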

  1. Scale modelling in LMFBR safety

    International Nuclear Information System (INIS)

    Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.

    1979-01-01

    This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratios of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of material for the model is one with the same density and stress-strain relationship as the prototype material at the operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain-rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used, to assure that heat transfer between high-temperature loading sources and the coolant simulant, and non-equilibrium effects in two-phase sources, are accounted for. For the geometry and flow conditions of interest, the

  2. Assessing Electronic Cigarette-Related Tweets for Sentiment and Content Using Supervised Machine Learning.

    Science.gov (United States)

    Cole-Lewis, Heather; Varghese, Arun; Sanders, Amy; Schwarz, Mary; Pugatch, Jillian; Augustson, Erik

    2015-08-25

    Electronic cigarettes (e-cigarettes) continue to be a growing topic among social media users, especially on Twitter. The ability to analyze conversations about e-cigarettes in real-time can provide important insight into trends in the public's knowledge, attitudes, and beliefs surrounding e-cigarettes, and subsequently guide public health interventions. Our aim was to establish a supervised machine learning algorithm to build predictive classification models that assess Twitter data for a range of factors related to e-cigarettes. Manual content analysis was conducted for 17,098 tweets. These tweets were coded for five categories: e-cigarette relevance, sentiment, user description, genre, and theme. Machine learning classification models were then built for each of these five categories, and word groupings (n-grams) were used to define the feature space for each classifier. Predictive performance scores for classification models indicated that the models correctly labeled the tweets with the appropriate variables between 68.40% and 99.34% of the time, and the percentage of maximum possible improvement over a random baseline that was achieved by the classification models ranged from 41.59% to 80.62%. Classifiers with the highest performance scores that also achieved the highest percentage of the maximum possible improvement over a random baseline were Policy/Government (performance: 0.94; % improvement: 80.62%), Relevance (performance: 0.94; % improvement: 75.26%), Ad or Promotion (performance: 0.89; % improvement: 72.69%), and Marketing (performance: 0.91; % improvement: 72.56%). The most appropriate word-grouping unit (n-gram) was 1 for the majority of classifiers. Performance continued to marginally increase with the size of the training dataset of manually annotated data, but eventually leveled off. Even at low dataset sizes of 4000 observations, performance characteristics were fairly sound. Social media outlets like Twitter can uncover real-time snapshots of
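
The pipeline described in this record, hand-labeled texts turned into word n-gram features and fed to a supervised classifier, can be sketched with a tiny multinomial naive Bayes model standing in for the authors' actual toolchain; the labeled example tweets below are invented.

```python
# Word n-gram features (n = 1 here, the unit the paper found best for most
# classifiers) + multinomial naive Bayes with Laplace smoothing.
import math
from collections import Counter

def ngrams(text, n=1):
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

class NaiveBayes:
    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = Counter(labels)
        self.counts = {c: Counter() for c in self.classes}
        for t, y in zip(texts, labels):
            self.counts[y].update(ngrams(t))
        self.vocab = set().union(*(self.counts[c] for c in self.classes))
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = math.log(self.prior[c])
            for g in ngrams(text):
                lp += math.log((self.counts[c][g] + 1) /
                               (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Invented hand-coded training tweets (sentiment only, one of the five codes).
train_texts = ["vaping helped me quit smoking", "love my new vape flavor",
               "ecigs are a health risk", "ban e-cigarette ads now"]
train_labels = ["positive", "positive", "negative", "negative"]
clf = NaiveBayes().fit(train_texts, train_labels)
```

In the study each of the five coding categories (relevance, sentiment, user description, genre, theme) gets its own classifier trained this way on the 17,098 manually coded tweets.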

  3. Cell-size distribution and scaling in a one-dimensional Kolmogorov-Johnson-Mehl-Avrami lattice model with continuous nucleation

    Science.gov (United States)

    Néda, Zoltán; Járai-Szabó, Ferenc; Boda, Szilárd

    2017-10-01

    The Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth model is considered on a one-dimensional (1D) lattice. Cells can grow with constant speed and continuously nucleate on the empty sites. We offer an alternative mean-field-like approach for describing theoretically the dynamics and derive an analytical cell-size distribution function. Our method reproduces the same scaling laws as the KJMA theory and has the advantage that it leads to a simple closed form for the cell-size distribution function. It is shown that a Weibull distribution is appropriate for describing the final cell-size distribution. The results are discussed in comparison with Monte Carlo simulation data.
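
The 1D lattice KJMA dynamics described above can be simulated directly: empty sites nucleate with some probability per time step and cells grow by one site per step until the lattice is full, at which point the cell sizes are read off. Lattice size, nucleation probability and the left-wins tie rule are arbitrary choices, not the paper's parameters.

```python
# Monte Carlo sketch of 1D KJMA growth with continuous nucleation.
import random
from collections import Counter

def kjma_1d(n_sites=10_000, p_nuc=0.01, seed=42):
    """Return the sorted list of final cell sizes on a full lattice."""
    random.seed(seed)
    lab = [0] * n_sites                 # 0 = untransformed site
    next_label = 1
    while any(s == 0 for s in lab):
        snap = lab[:]
        for i in range(n_sites):        # growth: each cell advances one site
            if snap[i] == 0:
                left = snap[i - 1] if i > 0 else 0
                right = snap[i + 1] if i < n_sites - 1 else 0
                if left or right:
                    lab[i] = left or right   # ties resolved in favor of left
        for i in range(n_sites):        # continuous nucleation on empty sites
            if lab[i] == 0 and random.random() < p_nuc:
                lab[i] = next_label
                next_label += 1
    return sorted(Counter(lab).values())

sizes = kjma_1d()
mean_size = sum(sizes) / len(sizes)
```

A histogram of `sizes` is the empirical cell-size distribution the paper compares against its closed-form (Weibull) result; the fit itself is omitted here.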

  4. The efficacy of support vector machines (SVM)

    Indian Academy of Sciences (India)

    (2006) by applying an SVM statistical learning machine on the time-scale wavelet decomposition methods. We used the data of 108 events in central Japan with magnitude ranging from 3 to 7.4 recorded at KiK-net network stations, for a source–receiver distance of up to 150 km during the period 1998–2011. We applied a ...

  5. Reactive power generation in high speed induction machines by continuously occurring space-transients

    Science.gov (United States)

    Laithwaite, E. R.; Kuznetsov, S. B.

    1980-09-01

    A new technique of continuously generating reactive power from the stator of a brushless induction machine is conceived and tested on a 10-kW linear machine and on 35 and 150 rotary cage motors. An auxiliary magnetic wave traveling at rotor speed is artificially created by the space-transient attributable to the asymmetrical stator winding. At least two distinct windings of different pole-pitch must be incorporated. This rotor wave drifts in and out of phase repeatedly with the stator MMF wave proper, and the resulting modulation of the airgap flux is used to generate reactive VA apart from that required for magnetization or leakage flux. The VAR generation effect increases with machine size, and leading power factor operation of the entire machine is viable for large industrial motors and power system induction generators.

  6. Some trends in man-machine interface design for industrial process plants

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1980-01-01

    The demands for an efficient and reliable man-machine interface in industrial process plants are increasing due to the steadily growing size and complexity of installations. At the same time, computerized technology offers the possibility of powerful and effective solutions to designers... In the paper, problems related to interface design, operator training and human reliability are discussed in the light of this technological development, and an integrated approach to system design based on a consistent model or framework describing the man-machine interaction is advocated. The work presented...

  7. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver, as all the reading passages and student activities are provided. Presented in s

  8. High Accuracy Nonlinear Control and Estimation for Machine Tool Systems

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios

    Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances despite moderate wear and tear. The purpose is to ensure that full accuracy is maintained between service intervals and to advise when overhaul is needed. The thesis argues that the quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low-level control architectures...

  9. Modernity of parts in casting machines and coefficients of total productive maintenance

    Directory of Open Access Journals (Sweden)

    S. Borkowski

    2010-10-01

    Full Text Available The goal of this study is to investigate the impact of equipment efficiency in casting machines on the quality of die castings made of Al-Si alloys, in consideration of the machines' modernity. The analysis focused on two cold-chamber die-casting machines. The assessment of the modernity of the equipment was made based on an ABC analysis of technology and Parker's scale. Then, the coefficients of total productive maintenance (TPM) were employed to assess the efficiency of both machines. Correlation coefficients r allowed the authors to demonstrate the relationships between individual TPM coefficients and the number of non-conforming products. The findings point to differences between the factors determining casting quality that result from the machines' levels of modernity.

  10. Advances Towards Synthetic Machines at the Molecular and Nanoscale Level

    Directory of Open Access Journals (Sweden)

    Kristina Konstas

    2010-06-01

    Full Text Available The fabrication of increasingly smaller machines to the nanometer scale can be achieved by either a “top-down” or “bottom-up” approach. While the former is reaching its limits of resolution, the latter is showing promise for the assembly of molecular components, in a comparable approach to natural systems, to produce functioning ensembles in a controlled and predetermined manner. In this review we focus on recent progress in molecular systems that act as molecular machine prototypes such as switches, motors, vehicles and logic operators.

  11. Machine performance assessment and enhancement for a hexapod machine

    Energy Technology Data Exchange (ETDEWEB)

    Mou, J.I. [Arizona State Univ., Tempe, AZ (United States); King, C. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems Center

    1998-03-19

    The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open-architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  12. Superconducting rotating machines

    International Nuclear Information System (INIS)

    Smith, J.L. Jr.; Kirtley, J.L. Jr.; Thullen, P.

    1975-01-01

    The opportunities and limitations of the applications of superconductors in rotating electric machines are given. The relevant properties of superconductors and the fundamental requirements for rotating electric machines are discussed. The current state-of-the-art of superconducting machines is reviewed. Key problems, future developments and the long range potential of superconducting machines are assessed

  13. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    .... Demand for parallelizing learning algorithms is highly task-specific: in some settings it is driven by the enormous dataset sizes, in others by model complexity or by real-time performance requirements...

  14. The use of machine learning and nonlinear statistical tools for ADME prediction.

    Science.gov (United States)

    Sakiyama, Yojiro

    2009-02-01

    Absorption, distribution, metabolism and excretion (ADME)-related failure of drug candidates is a major issue for the pharmaceutical industry today. Prediction of ADME by in silico tools has now become an inevitable paradigm to reduce cost and enhance efficiency in pharmaceutical research. Recently, machine learning as well as nonlinear statistical tools have been widely applied to predict routine ADME end points. To achieve accurate and reliable predictions, it is a prerequisite to understand the concepts, mechanisms and limitations of these tools. Here, we have devised a small synthetic nonlinear data set to help understand the mechanism of machine learning by 2D-visualisation. We applied six machine learning methods to four different data sets: the naive Bayes classifier, classification and regression trees, random forest, Gaussian processes, support vector machines and k-nearest neighbours. The results demonstrated that ensemble learning and kernel machines displayed greater prediction accuracy than classical methods irrespective of data set size. The importance of interaction with the engineering field is also addressed. The results described here provide insights into the mechanism of machine learning, which will enable appropriate usage in the future.
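
    As a sketch of the comparison the abstract describes (on a generic synthetic nonlinear data set, not the ADME data used in the study), the six method families can be run side by side with scikit-learn:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# synthetic nonlinear 2-D two-class problem
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

models = {
    "naive Bayes":      GaussianNB(),
    "tree (CART)":      DecisionTreeClassifier(random_state=0),
    "random forest":    RandomForestClassifier(n_estimators=200, random_state=0),
    "Gaussian process": GaussianProcessClassifier(random_state=0),
    "SVM (RBF)":        SVC(kernel="rbf", gamma="scale"),
    "k-NN":             KNeighborsClassifier(n_neighbors=5),
}
for name, m in models.items():
    acc = m.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name:18s} accuracy {acc:.3f}")
```

    On such curved class boundaries the kernel and ensemble methods typically edge out the linear-boundary-leaning naive Bayes classifier, mirroring the abstract's conclusion.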

  15. Applications of random forest feature selection for fine-scale genetic population assignment.

    Science.gov (United States)

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with F ST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon ( Salmo salar ) and a published SNP data set for Alaskan Chinook salmon ( Oncorhynchus tshawytscha ). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than F ST -selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using F ST -selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
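
    A minimal sketch of the idea on simulated genotypes (not the salmon data): rank loci by random-forest feature importance, keep a small panel, and score self-assignment by cross-validation. Sample sizes, the allele-frequency shift, and selecting the panel on the full data (which slightly inflates accuracy) are simplifications of this sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_pop, n_snps, n_info = 100, 500, 20

# allele frequencies: mostly shared between the two populations, with a
# frequency shift at the first n_info "informative" loci
p0 = rng.uniform(0.05, 0.95, n_snps)
p1 = p0.copy()
p1[:n_info] = np.clip(p0[:n_info] + 0.3, 0, 1)

# diploid genotypes coded 0/1/2 copies of the reference allele
geno = np.vstack([rng.binomial(2, p0, (n_per_pop, n_snps)),
                  rng.binomial(2, p1, (n_per_pop, n_snps))])
pop = np.repeat([0, 1], n_per_pop)

# rank loci by random-forest importance, keep a 30-SNP panel
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(geno, pop)
panel = np.argsort(rf.feature_importances_)[::-1][:30]

# cross-validated self-assignment accuracy with the reduced panel
acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                      geno[:, panel], pop, cv=5).mean()
print(f"self-assignment accuracy with 30-SNP panel: {acc:.2f}")
```

    In this toy setting most of the truly informative loci land in the top of the importance ranking, which is the property the study exploits for panel design.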

  16. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview on current sustainable machining. Its chapters cover the concept in economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful on both undergraduate and postgraduate levels and it is of interest to all those working with manufacturing and machining technology.

  17. Advanced Electrical Machines and Machine-Based Systems for Electric and Hybrid Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Cheng

    2015-09-01

    Full Text Available The paper presents a number of advanced solutions on electric machines and machine-based systems for the powertrain of electric vehicles (EVs. Two types of systems are considered, namely the drive systems designated to the EV propulsion and the power split devices utilized in the popular series-parallel hybrid electric vehicle architecture. After reviewing the main requirements for the electric drive systems, the paper illustrates advanced electric machine topologies, including a stator permanent magnet (stator-PM motor, a hybrid-excitation motor, a flux memory motor and a redundant motor structure. Then, it illustrates advanced electric drive systems, such as the magnetic-geared in-wheel drive and the integrated starter generator (ISG. Finally, three machine-based implementations of the power split devices are expounded, built up around the dual-rotor PM machine, the dual-stator PM brushless machine and the magnetic-geared dual-rotor machine. As a conclusion, the development trends in the field of electric machines and machine-based systems for EVs are summarized.

  18. Asynchronized synchronous machines

    CERN Document Server

    Botvinnik, M M

    1964-01-01

    Asynchronized Synchronous Machines focuses on theoretical research on asynchronized synchronous (AS) machines, which are “hybrids” of synchronous and induction machines that can operate with slip. Topics covered in this book include the initial equations; the vector diagram of an AS machine; regulation in cases of deviation from the law of full compensation; parameters of the excitation system; and the schematic diagram of an excitation regulator. The possible applications of AS machines and their calculation in certain cases are also discussed. This publication is beneficial for students and indiv

  19. Computational capabilities of multilayer committee machines

    Energy Technology Data Exchange (ETDEWEB)

    Neirotti, J P [NCRG, Aston University, Birmingham (United Kingdom); Franco, L, E-mail: j.p.neirotti@aston.ac.u [Depto. de Lenguajes y Ciencias de la Computacion, Universidad de Malaga (Spain)

    2010-11-05

    We obtained an analytical expression for the computational complexity of multilayer committee machines with a finite number of hidden layers (L < ∞), using the generalization complexity measure introduced by Franco et al (2006) IEEE Trans. Neural Netw. 17 578. Although our result is valid in the large-size limit and for an ultrametric overlap synaptic matrix, it provides a useful tool for inferring the appropriate architecture a network must have to reproduce an arbitrary realizable Boolean function.
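
    The extra computational power a committee machine gains over a single perceptron can be made concrete with a toy example (not from the paper): a 3-member committee of sign units, deciding by majority vote, that realizes XOR on ±1 inputs, a Boolean function no single perceptron can compute. The weights below are a hand-picked illustration.

```python
import numpy as np

def committee(x, W, b):
    """Committee machine: majority vote (sign of the sum) of K sign-unit
    members, member k computing sign(W[k] @ x + b[k])."""
    return int(np.sign(np.sign(W @ x + b).sum()))

# 3-member committee on {-1,+1}^2 realizing XOR
W = np.array([[ 1.0, -1.0],   # member fires only on (+1, -1)
              [-1.0,  1.0],   # member fires only on (-1, +1)
              [ 0.0,  0.0]])  # constant +1 tie-breaker
b = np.array([-1.0, -1.0, 1.0])

for x1 in (-1.0, 1.0):
    for x2 in (-1.0, 1.0):
        print(x1, x2, "->", committee(np.array([x1, x2]), W, b))
        # output is +1 exactly when x1 != x2
```

    Adding members (and layers of them) enlarges the class of realizable Boolean functions, which is what the generalization complexity measure in the abstract quantifies.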

  20. Large Scale Behavior and Droplet Size Distributions in Crude Oil Jets and Plumes

    Science.gov (United States)

    Katz, Joseph; Murphy, David; Morra, David

    2013-11-01

    The 2010 Deepwater Horizon blowout introduced several million barrels of crude oil into the Gulf of Mexico. Injected initially as a turbulent jet containing crude oil and gas, the spill caused the formation of a subsurface plume stretching for tens of miles. The behavior of such buoyant multiphase plumes depends on several factors, such as the oil droplet and bubble size distributions, current speed, and ambient stratification. While large droplets quickly rise to the surface, fine ones together with entrained seawater form intrusion layers. Many elements of the physics of droplet formation by an immiscible turbulent jet, and of the resulting size distribution, have not been elucidated, but are known to be significantly influenced by the addition of dispersants, which vary the Weber number by orders of magnitude. We present experimental high-speed visualizations of turbulent jets of sweet petroleum crude oil (MC 252) premixed with Corexit 9500A dispersant at various dispersant-to-oil ratios. Observations were conducted in a 0.9 m × 0.9 m × 2.5 m towing tank, where the large-scale behavior of the jet, both stationary and towed at various speeds to simulate cross-flow, was recorded at high speed. Preliminary data on oil droplet size and spatial distributions were also measured using a videoscope and pulsed light sheet. Sponsored by Gulf of Mexico Research Initiative (GoMRI).

  1. Machine Shop Lathes.

    Science.gov (United States)

    Dunn, James

    This guide, the second in a series of five machine shop curriculum manuals, was designed for use in machine shop courses in Oklahoma. The purpose of the manual is to equip students with basic knowledge and skills that will enable them to enter the machine trade at the machine-operator level. The curriculum is designed so that it can be used in…

  2. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    Science.gov (United States)

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch with a fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, sufficiently considering multi-scale contextual information from the deep layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back-propagation (BP) algorithm, which contains a new up-sampling method, is utilized to train the MFC-CNN-ELM architecture. Experimental comparisons demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.
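
    The extreme learning machine component can be sketched independently of the full MFC-CNN-ELM pipeline. Below is a generic ELM on toy 2-D data: a random, fixed hidden layer with only the output weights fitted by regularized least squares. All data and parameters are illustrative, not the paper's architecture or features.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine: the hidden layer is random and
    fixed; only the output weights are fitted, by ridge least squares."""
    def __init__(self, n_hidden=200, reg=1e-3):
        self.n_hidden, self.reg = n_hidden, reg

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # ridge solution: beta = (H^T H + reg*I)^-1 H^T y
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# toy two-class problem with a nonlinear (circular) decision boundary
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5, 1.0, -1.0)
model = ELM().fit(X[:300], y[:300])
acc = np.mean(np.sign(model.predict(X[300:])) == y[300:])
print(f"test accuracy: {acc:.2f}")
```

    Because training reduces to one linear solve, ELMs are fast to fit, which is the usual motivation for pairing them with CNN feature extractors as the paper does.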

  3. Preliminary study on rotary ultrasonic machining of Bk-7 optical glass rod

    International Nuclear Information System (INIS)

    Hamzah, E.; Izman, S.; Khoo, C.Y.; Zainal Abidin, N.N.

    2007-01-01

    This paper presents experimental observations on rotary ultrasonic machining (RUM) of a BK7 optical glass rod. BK7 is a common technical optical glass for high-quality optical components due to its high linear optical transmission in the visible range and its chemical stability. RUM is a hybrid machining process that combines the material removal mechanisms of diamond grinding and ultrasonic machining (USM); it is non-thermal and non-chemical, and creates no change in the microstructure, chemical or physical properties of the workpiece. In RUM, a controlled static load is applied to a rotating core drill with metal-bonded diamond abrasive, which is ultrasonically vibrated in the axial direction. A water-soluble coolant was used to cool the tool and sample during machining. Using a design-of-experiments (DOE) approach, the effects of spindle speed and feed rate on ultrasonic machinability were investigated. The main effects and two-factor interactions of the process parameters (spindle speed and feed rate) on the output variables (MRR, surface roughness, opaqueness, chipping thickness and chipping size) are studied. (author)

  4. Cleaning, disassembly, and requalification of the FFTF in vessel handling machine

    International Nuclear Information System (INIS)

    Coops, W.J.

    1977-10-01

    The Engineering Model In Vessel Handling Machine (IVHM) was successfully removed, cleaned, disassembled, inspected, reassembled and reinstalled into the sodium test vessel at Richland, Washington. This was the first time in the United States that a full-size operational sodium-wetted machine had been cleaned by the water vapor nitrogen process and requalified for operation. The work utilized an atmospheric control system during removal, a tank-type water vapor nitrogen cleaning system and an open "hands-on" disassembly and assembly stand. Results of the work indicate that the tools, process and equipment are adequate for the non-radioactive maintenance sequence. Additionally, the work proves that a machine of this complexity can be successfully cleaned, maintained and re-used without the need to replace a large percentage of the sodium-wetted parts

  5. Testing machine for fatigue crack kinetic investigation in specimens under bending

    International Nuclear Information System (INIS)

    Panasyuk, V.V.; Ratych, L.V.; Dmytrakh, I.N.

    1978-01-01

    A kinematic diagram is described of a testing machine for the investigation of fatigue crack kinetics in prismatic specimens subjected to pure bending. A technique is suggested for choosing an optimum ratio of the parameters of the "testing machine-specimen" system, which provides stabilization of the stress intensity coefficient over a certain region of crack development under hard loading. The compliance of the machine, constructed according to the described diagram and designed for a maximum bending moment of 300 N·m, is illustrated on specimens of 40KhS and 15Kh2MFA steels. The results obtained can be used in the design of testing machines for studying pure bending under hard loading and in choosing the sizes of specimens with rectangular cross-sections for investigations into fatigue crack kinetics.

  6. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties, and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and for dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89), so use of the full scale comes at a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
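
    The error calculation can be sketched with hypothetical numbers (the distributions below are illustrative, not the published mRS data): combine a true-score distribution with an inter-rater confusion matrix, then count misclassifications for the full scale versus a dichotomized cut-point.

```python
import numpy as np

K = 7  # mRS grades 0..6

# HYPOTHETICAL trial distribution: p_true[i] = P(true mRS = i)
p_true = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])

# HYPOTHETICAL rater noise: conf[i, j] = P(assigned j | true i),
# 70% exact, the rest spilled evenly onto adjacent grades; rows sum to 1
conf = np.zeros((K, K))
for i in range(K):
    neighbours = [j for j in (i - 1, i + 1) if 0 <= j < K]
    conf[i, i] = 0.7
    for j in neighbours:
        conf[i, j] = 0.3 / len(neighbours)

def error_full_scale(p, C):
    """P(assigned grade differs from true grade) over the whole scale."""
    return 1.0 - sum(p[i] * C[i, i] for i in range(K))

def error_dichotomized(p, C, cut):
    """P(assigned grade falls on the wrong side of mRS <= cut)."""
    return sum(p[i] * C[i, j]
               for i in range(K) for j in range(K)
               if (i <= cut) != (j <= cut))

print(f"full-scale ('shift') error: {error_full_scale(p_true, conf):.1%}")
for cut in range(K - 1):
    print(f"mRS<={cut} dichotomy error: "
          f"{error_dichotomized(p_true, conf, cut):.1%}")
```

    With these illustrative numbers the full-scale error is far larger than any dichotomy's, because only noise that crosses the cut-point counts against a dichotomized outcome — the qualitative pattern the abstract reports (26.1% vs 6.8%).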

  7. International orientation and export commitment in fast small and medium size firms internationalization: scales validation and implications for the Brazilian case

    Directory of Open Access Journals (Sweden)

    Marcelo André Machado

    Full Text Available Abstract A set of changes in the competitive environment has recently provoked the emergence of a new kind of organization that, from its creation, derives a meaningful share of its revenue from international activities carried out on more than one continent. Within this new reality, models describing the internationalization of the firm in phases, or in step with its growth, have lost their capacity to explain this process for small- and medium-sized enterprises (SMEs). Thus, in this paper, the international orientation (IO) and export commitment (EC) constructs are revised in the theoretical context of the fast internationalization of medium-sized companies, so as to identify scales that more accurately measure these dimensions in the Brazilian setting. After a literature review and exploratory research, the IO and EC scales proposed by Knight and Cavusgil (2004) and Shamsuddoha and Ali (2006), respectively, were applied to a sample of 398 small- and medium-sized exporting Brazilian companies. In spite of the conjunctural and situational differences inherent to Brazilian companies, the selected scales showed high measurement reliability. Furthermore, the field research provides evidence of a phenomenon of fast internationalization among medium-sized companies in Brazil, and supports some theoretical assumptions of other empirical investigations carried out with samples from developed countries.

  8. Effects of the sliding rehabilitation machine on balance and gait in chronic stroke patients - a controlled clinical trial.

    Science.gov (United States)

    Byun, Seung-Deuk; Jung, Tae-Du; Kim, Chul-Hyun; Lee, Yang-Soo

    2011-05-01

    To investigate the effects of a sliding rehabilitation machine on balance and gait in chronic stroke patients. A non-randomized crossover design. Inpatient rehabilitation in a general hospital. Thirty patients with chronic stroke who had a medium or high falling risk as determined by the Berg Balance Scale. Participants were divided into two groups and underwent four weeks of training. Group A (n = 15) underwent training with the sliding rehabilitation machine for two weeks with concurrent conventional training, followed by conventional training only for another two weeks. Group B (n = 15) underwent the same training in reverse order. The effect of the experimental period was defined as the sum of changes during training with the sliding rehabilitation machine in each group, and the effect of the control period was defined as the changes during conventional training only in each group. Outcome measures were the Functional Ambulation Category, Berg Balance Scale, Six-Minute Walk Test, Timed Up and Go Test, Korean Modified Barthel Index, Modified Ashworth Scale and Manual Muscle Test. Statistically significant improvements were observed in all parameters except the Modified Ashworth Scale in the experimental period, but only in the Six-Minute Walk Test in the control period. The sliding rehabilitation machine may be a useful tool for the improvement of balance and gait abilities in chronic stroke patients.

  9. In-situ monitoring of blood glucose level for dialysis machine by AAA-battery-size ATR Fourier spectroscopy

    Science.gov (United States)

    Hosono, Satsuki; Sato, Shun; Ishida, Akane; Suzuki, Yo; Inohara, Daichi; Nogo, Kosuke; Abeygunawardhana, Pradeep K.; Suzuki, Satoru; Nishiyama, Akira; Wada, Kenji; Ishimaru, Ichiro

    2015-07-01

    For blood glucose level measurement in dialysis machines, we proposed AAA-battery-size ATR (attenuated total reflection) Fourier spectroscopy in the middle-infrared region. The proposed one-shot Fourier spectroscopic imaging is a near-common-path, spatial phase-shift interferometer with high time resolution. Because a large number of spectra — the camera frame rate (e.g. 60 Hz) multiplied by the pixel count — can be obtained in 1 s, statistical averaging realizes highly accurate spectral measurement. We evaluated the quantitative accuracy of our proposed method for measuring glucose concentration in the near-infrared region with liquid cells. We confirmed that absorbance at 1600 nm correlated highly with glucose concentration (correlation coefficient: 0.92). But when measuring whole blood, complex light phenomena caused by red blood cells, namely scattering and multiple reflection, deteriorate the spectral data. Thus, we also proposed ultrasound-assisted spectroscopic imaging that traps particles at standing-wave nodes. If the ATR prism is oscillated mechanically, an anti-node area is generated around the evanescent light field on the prism surface. By eliminating the complex light phenomena of red blood cells, glucose concentration in whole blood can be quantified with high accuracy. In this report, we successfully trapped red blood cells in normal saline solution with an ultrasonic standing wave (frequency: 2 MHz).

  10. Turbo-machine deployment of HTR-10 GT

    International Nuclear Information System (INIS)

    Zhu Shutang; Wang Jie; Zhang Zhengming; Yu Suyuan

    2005-01-01

    As a testing project of the gas turbine modular High Temperature Gas-cooled Reactor (HTGR), HTR-10GT has been studied and developed by the Institute of Nuclear and New Energy Technology (INET) of Tsinghua University after the success of HTR-10 with a steam turbine cycle. The main purposes of this project are to demonstrate the gas turbine modular HTGR, to optimize the deployment of the Power Conversion Unit (PCU) and to verify the techniques of the turbo-machine, operating modes and controlling measures. HTR-10GT concentrates on the PCU design and the turbo-machine deployment. Possible turbo-machine deployments have been investigated and two of them are introduced in this paper. The preliminary design for the turbo-machine of HTR-10GT is a single shaft of vertical layout, arranged by the side of the reactor; the turbo-compressor rotary speed was selected to be 250 s⁻¹ (15,000 r/min) by considering the efficiency of the turbo-compressor blade systems, the strength conditions and the mass and size characteristics of the turbo-compressor. The rotor system will be supported by electromagnetic bearings (EMBs) to avoid possible contamination of the primary loop. Of all the components in this design, the high-speed turbo-generator appears to be the most challenging worldwide. As an alternative design, a gearbox complex is used to reduce the rotary speed from the turbo-compressor's 250 s⁻¹ to 50 s⁻¹ so that an ordinary generator can be used. (authors)

  11. Findings of the 2009 Workshop on Statistical Machine Translation

    NARCIS (Netherlands)

    Callison-Burch, C.; Koehn, P.; Monz, C.; Schroeder, J.; Callison-Burch, C.; Koehn, P.; Monz, C.; Schroeder, J.

    2009-01-01

    This paper presents the results of the WMT09 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 87 machine translation systems and 22 system combination entries. We used the ranking of these systems to

  12. Energetics, scaling and sexual size dimorphism of spiders.

    Science.gov (United States)

    Grossi, B; Canals, M

    2015-03-01

    The extreme sexual size dimorphism in spiders has motivated studies for many years. In many species the male can be very small relative to the female. There are several hypotheses trying to explain this fact, most of them emphasizing the role of energy in determining spider size. The aim of this paper is to review the role of energy in the sexual size dimorphism of spiders, even for those spiders that do not necessarily live in high foliage, using physical and allometric principles. Here we propose that the cost of transport (equivalently, energy expenditure) and speed are traits under selection pressure in male spiders, favoring those of smaller size to reduce travel costs. The morphology of the spiders responds to these selective forces depending upon the lifestyle of the spiders. Climbing and bridging spiders must overcome the force of gravity. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. In wandering spiders with low population density and, as a consequence, few male-male interactions, high speed and low energy expenditure or cost of transport should be favored by natural selection. Pendulum mechanics show the advantages of long legs in spiders and their relationship with high speed, even in climbing and bridging spiders. Thus small size, compensated by long legs, should be the expected morphology for a fast and mobile male spider.

  13. FY 1992 research and development project for large-scale industrial technologies. Report on results of R and D of superhigh technological machining systems; 1992 nendo chosentan kako system no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-03-01

    Described herein are the FY 1992 results of the R and D project aimed at establishing technologies for developing machine and electronic device members of super-high precision and high functionality by excited-beam processing and super-high-precision machining. The elementary researches on super-high-precision machining achieve the given targets for the precision stability of the feed positioning device. The researches on development of high-precision rotating devices, on a trial basis, are directed at improving the rotational precision of pneumatic static pressure bearings and magnetism correction/controlling circuits, increasing the speed and precision of 3-point type rotational precision measurement methods, and developing rotation-driving motors, achieving a rotational precision of 0.015 µm at 2000 rpm. The researches on surface modification technologies aided by ion beams involve experiments for the production of crystalline Si films, and thin-film transistors of the Si films, using the surface-modified portion of a large-size glass substrate. The researches on super-high-technology machining standard measurement involve development of length-measuring systems aided by a dye laser, achieving a precision of ±10 nm or less over a 100 mm measurement range. (NEDO)

  14. Machining of Machine Elements Made of Polymer Composite Materials

    Science.gov (United States)

    Baurova, N. I.; Makarov, K. A.

    2017-12-01

    The machining of the machine elements that are made of polymer composite materials (PCMs) or are repaired using them is considered. Turning, milling, and drilling are shown to be most widely used among all methods of cutting PCMs. Cutting conditions for the machining of PCMs are presented. The factors that most strongly affect the roughness parameters and the accuracy of cutting PCMs are considered.

  15. Trends and developments in industrial machine vision: 2013

    Science.gov (United States)

    Niel, Kurt; Heinzl, Christoph

    2014-03-01

    When following current advancements and implementations in the field of machine vision, there seem to be no borders for future developments: calculating power constantly increases, new ideas are spreading, and previously challenging approaches are introduced into the mass market. Within the past decades these advances have had dramatic impacts on our lives. Consumer electronics, e.g. computers or telephones, which once occupied large volumes, now fit in the palm of a hand. To note just a few examples: face recognition was adopted by the consumer market, 3D capturing became cheap, and, thanks to the huge community, SW coding got easier using sophisticated development platforms. However, a gap still remains between consumer and industrial applications: while the former have to be entertaining, the latter have to be reliable. Recent studies (e.g. VDMA [1], Germany) show a moderately increasing market for machine vision in industry. When industry is asked about its needs, the main challenges for industrial machine vision are simple usage and reliability for the process, quick support, full automation, self/easy adjustment at changing process parameters, "forget it in the line". Furthermore, a big challenge is to support quality control: nowadays the operator has to accurately define the tested features for checking the probes. There is also an upcoming development to let automated machine vision applications find out essential parameters at a more abstract level (top down). In this work we focus on three current and future topics for industrial machine vision: metrology supporting automation, quality control (inline/atline/offline), as well as visualization and analysis of datasets with steadily growing sizes. Finally, the general trend from pixel-oriented towards object-oriented evaluation is addressed. We do not directly address the field of robotics taking advances from machine vision. This is actually a fast changing area which is worth an own

  16. Reduced wear of enamel with novel fine and nano-scale leucite glass-ceramics.

    Science.gov (United States)

    Theocharopoulos, Antonios; Chen, Xiaohui; Hill, Robert; Cattell, Michael J

    2013-06-01

    Leucite glass-ceramics used to produce all-ceramic restorations can suffer from brittle fracture and wear the opposing teeth. High-strength and fine crystal sized leucite glass-ceramics have recently been reported. The objective of this study is to investigate whether fine and nano-scale leucite glass-ceramics with minimal matrix microcracking are associated with a reduction in in vitro tooth wear. Human molar cusps (n=12) were wear tested using a Bionix-858 testing machine (300,000 simulated masticatory cycles) against experimental fine crystal sized (FS) and nano-scale crystal sized (NS) leucite glass-ceramics and a commercial leucite glass-ceramic (Ceramco-3, Dentsply, USA). Wear was imaged using Secondary Electron Imaging (SEI) and quantified using white-light profilometry. Both experimental groups were found to produce significantly (p < 0.05) less enamel and material (ceramic) loss than the Ceramco-3 group. Increased waviness and damage was observed on the wear surfaces of the Ceramco-3 glass-ceramic disc/tooth group in comparison to the experimental groups. This was also indicated by higher surface roughness values for the Ceramco-3 glass-ceramic disc/tooth group. Fine and nano-sized leucite glass-ceramics produced a reduction in in vitro tooth wear. The high strength low wear materials of this study may help address the many problems associated with tooth enamel wear and restoration failure. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Avalanching Systems with Longer Range Connectivity: Occurrence of a Crossover Phenomenon and Multifractal Finite Size Scaling

    Directory of Open Access Journals (Sweden)

    Simone Benella

    2017-07-01

    Full Text Available Many out-of-equilibrium systems respond to external driving with nonlinear and self-similar dynamics. This near scale-invariant behavior of relaxation events has been modeled through sand pile cellular automata. However, a common feature of these models is the assumption of a local connectivity, while in many real systems, we have evidence for longer range connectivity and a complex topology of the interacting structures. Here, we investigate the role that longer range connectivity might play in near scale-invariant systems, by analyzing the results of a sand pile cellular automaton model on a Newman–Watts network. The analysis clearly indicates the occurrence of a crossover phenomenon in the statistics of the relaxation events as a function of the percentage of longer range links and the breaking of the simple Finite Size Scaling (FSS). The more complex nature of the dynamics in the presence of long-range connectivity is investigated in terms of multi-scaling features and analyzed by the Rank-Ordered Multifractal Analysis (ROMA).
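
A minimal sketch of the kind of model the abstract above describes: a toy 1D sandpile with Newman–Watts-style shortcut links added on top of a chain. The lattice size, shortcut probability and toppling rule below are simplifications for illustration, not the authors' exact automaton:

```python
import random

def run_sandpile(n=64, p=0.1, grains=3000, seed=1):
    """Toy 1D sandpile with Newman-Watts-style long-range shortcuts.

    A site whose height reaches its degree topples, sending one grain to
    each neighbour; grains crossing the open boundary are lost (sink -1).
    Returns the avalanche size (total topplings) for each added grain.
    """
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for i in range(n - 1):                 # open 1D chain
        nbrs[i].append(i + 1)
        nbrs[i + 1].append(i)
    nbrs[0].append(-1)                     # -1 marks the off-lattice sink
    nbrs[n - 1].append(-1)
    for i in range(n):                     # add shortcuts with probability p
        if rng.random() < p:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].append(j)
                nbrs[j].append(i)
    h = [0] * n
    sizes = []
    for _ in range(grains):
        h[rng.randrange(n)] += 1
        size = 0
        active = True
        while active:                      # relax until every site is stable
            active = False
            for i in range(n):
                if h[i] >= len(nbrs[i]):
                    h[i] -= len(nbrs[i])
                    for j in nbrs[i]:
                        if j >= 0:         # grains sent to the sink vanish
                            h[j] += 1
                    size += 1
                    active = True
        sizes.append(size)
    return sizes

sizes = run_sandpile()
```

Varying `p` shifts the avalanche-size statistics, which is the crossover effect the paper quantifies with finite-size scaling and ROMA.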

  18. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machining processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the rigor of linear algebra as part of the matrix-based research. The focus of the study is the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and operation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0...6000 min-1 and improve machining accuracy.

  19. Machinability of nickel based alloys using electrical discharge machining process

    Science.gov (United States)

    Khan, M. Adam; Gokul, A. K.; Bharani Dharan, M. P.; Jeevakarthikeyan, R. V. S.; Uthayakumar, M.; Thirumalai Kumaran, S.; Duraiselvam, M.

    2018-04-01

    High-temperature materials such as nickel based alloys and austenitic steels are frequently used for manufacturing critical aero engine turbine components. Literature on the conventional and unconventional machining of steels has been abundant over the past three decades. However, machining studies on superalloys remain a challenging task owing to their inherent properties, and these materials are difficult to cut using conventional processes. This research therefore focuses on an unconventional machining process for nickel alloys. Inconel 718 and Monel 400 are the two candidate materials used for the electrical discharge machining (EDM) process. The investigation consists of preparing a blind hole using a copper electrode of 6 mm diameter. Electrical parameters are varied to produce the plasma spark for the diffusion process, and the machining time is kept constant so that the experimental results for both materials can be compared. The influence of the process parameters on the tool wear mechanism and material removal is considered in the proposed experimental design. During machining, the tool is prone to discharging more material owing to the production of a high-energy plasma spark and the eddy current effect. The morphology of the machined surface was observed with a high resolution FE-SEM: fused electrode material was found as spherical clumps over the machined surface. Surface roughness was also measured from the surface profile using a profilometer. It is confirmed that there is no deviation and that the precise roundness of the drilled hole is maintained.

  20. Machine implications for detectors and physics

    International Nuclear Information System (INIS)

    Tauchi, Toshiaki

    2001-01-01

    Future linear colliders are very different in many aspects because of their low repetition rate (5∼200 Hz) and high accelerating gradient (22∼150 MeV/m). For high luminosity, the beam sizes must be squeezed into an extremely small region at the interaction point (IP). We briefly describe new phenomena at the IP, i.e. the beamstrahlung process and the creation of e+e- pairs and minijets. We also report machine implications related to the energy spread, beamstrahlung, bunch-train structure, beam polarizations and backgrounds for detectors and physics

  1. The achievements of the Z-machine; Les exploits de la Z-machine

    Energy Technology Data Exchange (ETDEWEB)

    Larousserie, D

    2008-03-15

    The ZR-machine, which represents the latest generation of Z-pinch machines, has recently begun preliminary testing before its full commissioning in Albuquerque (USA). During its tests the machine has operated well with electrical currents whose intensities of 26 million amperes are already twice as high as the operating current of the previous Z-machine. In 2006 the Z-machine reached temperatures of 2 billion kelvin, while 100 million kelvin would be sufficient to ignite thermonuclear fusion. In fact, the concept of Z-pinch machines was conceived in the fifties, but the technological breakthrough that allowed this recent success and the rebirth of the Z-machine was the replacement of gas by an array of metal wires through which the electrical current flows, vaporizing the wires and creating an imploding plasma. It is not well understood why Z-pinch machines generate far more radiation than theoretically expected. (A.C.)

  2. Bio-inspired wooden actuators for large scale applications.

    Directory of Open Access Journals (Sweden)

    Markus Rüggeberg

    Full Text Available Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.

  3. Quantum machine learning.

    Science.gov (United States)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  4. Machine learning derived risk prediction of anorexia nervosa.

    Science.gov (United States)

    Guo, Yiran; Wei, Zhi; Keating, Brendan J; Hakonarson, Hakon

    2016-01-20

    Anorexia nervosa (AN) is a complex psychiatric disease with a moderate to strong genetic contribution. In addition to conventional genome wide association (GWA) studies, researchers have been using machine learning methods in conjunction with genomic data to predict risk of diseases in which genetics play an important role. In this study, we collected whole genome genotyping data on 3940 AN cases and 9266 controls from the Genetic Consortium for Anorexia Nervosa (GCAN), the Wellcome Trust Case Control Consortium 3 (WTCCC3), the Price Foundation Collaborative Group and the Children's Hospital of Philadelphia (CHOP), and applied machine learning methods for predicting AN disease risk. The prediction performance is measured by the area under the receiver operating characteristic curve (AUC), indicating how well the model distinguishes cases from unaffected control subjects. A logistic regression model with the lasso penalty technique generated an AUC of 0.693, while Support Vector Machines and Gradient Boosted Trees reached AUCs of 0.691 and 0.623, respectively. Using different sample sizes, our results suggest that larger datasets are required to optimize the machine learning models and achieve higher AUC values. To our knowledge, this is the first attempt to assess AN risk based on genome-wide genotype-level data. Future integration of genomic, environmental and family-based information is likely to improve the AN risk evaluation process, eventually benefitting AN patients and families in the clinical setting.
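
The AUC values reported above have a direct probabilistic reading: the AUC equals the probability that a randomly chosen case receives a higher risk score than a randomly chosen control (the Mann–Whitney statistic). A minimal sketch, with hypothetical risk scores rather than the study's data:

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen case is scored above a randomly
    chosen control (ties count one half)."""
    wins = 0.0
    for c in case_scores:
        for u in control_scores:
            if c > u:
                wins += 1.0
            elif c == u:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical risk scores: the model ranks most cases above most controls.
example_auc = auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])  # 8/9 ~ 0.889
```

An AUC of 0.5 corresponds to random ranking, which is why values like 0.693 represent modest but real discrimination.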

  5. Machine protection systems

    CERN Document Server

    Macpherson, A L

    2010-01-01

    A summary of the Machine Protection System of the LHC is given, with particular attention given to the outstanding issues to be addressed, rather than the successes of the machine protection system from the 2009 run. In particular, the issues of Safe Machine Parameter system, collimation and beam cleaning, the beam dump system and abort gap cleaning, injection and dump protection, and the overall machine protection program for the upcoming run are summarised.

  6. Machines for lattice gauge theory

    International Nuclear Information System (INIS)

    Mackenzie, P.B.

    1989-05-01

    The most promising approach to the solution of the theory of strong interactions is large scale numerical simulation using the techniques of lattice gauge theory. At the present time, computing requirements for convincing calculations of the properties of hadrons exceed the capabilities of even the most powerful commercial supercomputers. This has led to the development of massively parallel computers dedicated to lattice gauge theory. This talk will discuss the computing requirements behind these machines, and general features of the components and architectures of the half dozen major projects now in existence. 20 refs., 1 fig

  7. Preliminary Test of Upgraded Conventional Milling Machine into PC Based CNC Milling Machine

    International Nuclear Information System (INIS)

    Abdul Hafid

    2008-01-01

    The CNC (Computerized Numerical Control) milling machine poses a challenge for innovation in the field of machining. To obtain machining quality equivalent to that of a CNC milling machine, a conventional milling machine was upgraded into a PC-based CNC milling machine through both mechanical and instrumentation changes: the original controls were replaced by servo drives, and proximity sensors were used. A computer program was constructed to issue instructions to the milling machine. The program consists of a GUI model and a ladder diagram, implemented in a programming system called RTX software. The results of the upgrade are the computer program and the CNC instruction set. This is a first step that will be continued in future work. With the upgraded milling machine, the user can work more optimally and safely with respect to accident risk. (author)

  8. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    The mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) is being evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw material powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) A large-scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length was successfully manufactured using a large-scale hollow capsule; this mother tube has high dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured with a small-scale can, and no remarkable difference between the bottom and top ends of the manufactured mother tube was observed. (3) Long length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, manufacturing mother tubes using large-scale hollow capsules is promising. (author)

  9. Inverse analysis of turbidites by machine learning

    Science.gov (United States)

    Naruse, H.; Nakao, K.

    2017-12-01

    This study aims to propose a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites by using a machine-learning technique. In this method, numerical simulation is repeated under various initial conditions, which produces a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial condition of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square of the difference between the true conditions and the output values of the NN. The empirical relationship between the numerical results and the initial conditions is explored in this method, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce a NN that estimates paleo-hydraulic conditions from data of ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. The numerical simulation was repeated 1000 times, and thus 1000 beds of turbidites were used as the training data for a NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. As a result of this test, the initial conditions of the validation data were successfully reconstructed by the NN. The estimated values show very small
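
The forward-simulate-then-invert idea above can be sketched compactly. The toy forward model, the parameter ranges, and the use of plain linear least squares in place of the authors' deep network are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(c):
    """Toy stand-in for the shallow-water forward model: maps an initial
    condition (flow velocity, sediment concentration) to a 5-point
    deposit-thickness profile along the slope."""
    x = np.linspace(0.0, 1.0, 5)
    vel, conc = c
    return conc * np.exp(-x / vel)

# 1. Repeat the forward model under varied initial conditions.
conditions = rng.uniform([0.5, 0.1], [2.0, 1.0], size=(1000, 2))
deposits = np.array([forward(c) for c in conditions])

# 2. Fit an inverse mapping deposits -> conditions. The study trains a deep
#    network; linear least squares is the simplest stand-in for the idea.
A = np.hstack([deposits, np.ones((len(deposits), 1))])
coef, *_ = np.linalg.lstsq(A, conditions, rcond=None)

# 3. Apply the learned inverse to an unseen deposit profile.
true_c = np.array([1.2, 0.6])
est_c = np.append(forward(true_c), 1.0) @ coef
```

The same loop structure (simulate, train inverse, validate on held-out runs) is what the abstract describes at much larger scale.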

  10. Mid-size urbanism

    NARCIS (Netherlands)

    Zwart, de B.A.M.

    2013-01-01

    To speak of the project for the mid-size city is to speculate about the possibility of mid-size urbanity as a design category. An urbanism not necessarily defined by the scale of the intervention or the size of the city undergoing transformation, but by the framing of the issues at hand and the

  11. Data Mining and Machine Learning in Astronomy

    Science.gov (United States)

    Ball, Nicholas M.; Brunner, Robert J.

    We review the current state of data mining and machine learning in astronomy. Data mining can have a somewhat mixed connotation from the point of view of a researcher in this field. If used correctly, it can be a powerful approach, holding the potential to fully exploit the exponentially increasing amount of available data, promising great scientific advances. However, if misused, it can be little more than the black box application of complex computing algorithms that may give little physical insight and provide questionable results. Here, we give an overview of the entire data mining process, from data collection through to the interpretation of results. We cover common machine learning algorithms, such as artificial neural networks and support vector machines; applications from a broad range of astronomy, emphasizing those in which data mining techniques directly contributed to improving science; and important current and future directions, including probability density functions, parallel algorithms, peta-scale computing, and the time domain. We conclude that, so long as one carefully selects an appropriate algorithm and is guided by the astronomical problem at hand, data mining can very much be a powerful tool rather than a questionable black box.

  12. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installation of the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating the image re-constructive method, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method was developed in this study that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to re-construct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results

  13. National Machine Guarding Program: Part 1. Machine safeguarding practices in small metal fabrication businesses.

    Science.gov (United States)

    Parker, David L; Yamin, Samuel C; Brosseau, Lisa M; Xi, Min; Gordon, Robert; Most, Ivan G; Stanley, Rodney

    2015-11-01

    Metal fabrication workers experience high rates of traumatic occupational injuries. Machine operators in particular face high risks, often stemming from the absence or improper use of machine safeguarding or the failure to implement lockout procedures. The National Machine Guarding Program (NMGP) was a translational research initiative implemented in conjunction with two workers' compensation insurers. Insurance safety consultants trained in machine guarding used standardized checklists to conduct a baseline inspection of machine-related hazards in 221 businesses. Safeguards at the point of operation were missing or inadequate on 33% of machines. Safeguards for other mechanical hazards were missing on 28% of machines. Older machines were both widely used and less likely than newer machines to be properly guarded. Lockout/tagout procedures were posted at only 9% of machine workstations. The NMGP demonstrates a need for improvement in many aspects of machine safety and lockout in small metal fabrication businesses. © 2015 The Authors. American Journal of Industrial Medicine published by Wiley Periodicals, Inc.

  14. Machine learning molecular dynamics for the simulation of infrared spectra.

    Science.gov (United States)

    Gastegger, Michael; Behler, Jörg; Marquetand, Philipp

    2017-10-01

    Machine learning has emerged as an invaluable tool in many research areas. In the present work, we harness this power to predict highly accurate molecular infrared spectra with unprecedented computational efficiency. To account for vibrational anharmonic and dynamical effects - typically neglected by conventional quantum chemistry approaches - we base our machine learning strategy on ab initio molecular dynamics simulations. While these simulations are usually extremely time consuming even for small molecules, we overcome these limitations by leveraging the power of a variety of machine learning techniques, not only accelerating simulations by several orders of magnitude, but also greatly extending the size of systems that can be treated. To this end, we develop a molecular dipole moment model based on environment dependent neural network charges and combine it with the neural network potential approach of Behler and Parrinello. Contrary to the prevalent big data philosophy, we are able to obtain very accurate machine learning models for the prediction of infrared spectra based on only a few hundred electronic structure reference points. This is made possible through the use of molecular forces during neural network potential training and the introduction of a fully automated sampling scheme. We demonstrate the power of our machine learning approach by applying it to model the infrared spectra of a methanol molecule, n-alkanes containing up to 200 atoms and the protonated alanine tripeptide, which at the same time represents the first application of machine learning techniques to simulate the dynamics of a peptide. In all of these case studies we find an excellent agreement between the infrared spectra predicted via machine learning models and the respective theoretical and experimental spectra.
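
The final step the abstract relies on is standard: once a dipole-moment trajectory is available (from ab initio MD or, here, from the neural-network dipole model), the IR spectrum is proportional to the power spectrum of that trajectory, i.e. the Fourier transform of the dipole autocorrelation. A sketch with a synthetic dipole signal (the two mode frequencies and timestep below are illustrative, not the paper's):

```python
import numpy as np

# Toy dipole-moment trajectory with two vibrational modes plus noise.
dt = 0.5e-15                                       # 0.5 fs timestep
t = np.arange(20000) * dt
rng = np.random.default_rng(0)
dipole = (np.sin(2 * np.pi * 30.0e12 * t)          # low-frequency mode, 30 THz
          + 0.5 * np.sin(2 * np.pi * 90.0e12 * t)  # weaker mode, 90 THz
          + 0.05 * rng.standard_normal(t.size))

# IR intensity ~ power spectrum of the dipole trajectory (equivalently,
# the Fourier transform of its autocorrelation function).
spectrum = np.abs(np.fft.rfft(dipole)) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]       # dominant band, skipping DC
```

Both injected modes appear as peaks, with the stronger mode dominating, which is how simulated spectra are compared against experiment in the case studies.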

  15. Non-conventional electrical machines

    CERN Document Server

    Rezzoug, Abderrezak

    2013-01-01

    The development of electrical machines is due to the convergence of material progress, improved calculation tools, and new feeding sources. Among the many recent machines, the authors have chosen, in this first book, to relate the progress in slow speed machines, high speed machines, and superconducting machines. The first part of the book is dedicated to materials and an overview of magnetism, mechanics, and heat transfer.

  16. Scaling law and enhancement of lift generation of an insect-size hovering flexible wing

    Science.gov (United States)

    Kang, Chang-kwon; Shyy, Wei

    2013-01-01

    We report a comprehensive scaling law and novel lift generation mechanisms relevant to the aerodynamic functions of structural flexibility in insect flight. Using a Navier–Stokes equation solver, fully coupled to a structural dynamics solver, we consider the hovering motion of a wing of insect size, in which the dynamics of fluid–structure interaction leads to passive wing rotation. Lift generated on the flexible wing scales with the relative shape deformation parameter, whereas the optimal lift is obtained when the wing deformation synchronizes with the imposed translation, consistent with previously reported observations for fruit flies and honeybees. Systematic comparisons with rigid wings illustrate that the nonlinear response in wing motion results in a greater peak angle compared with a simple harmonic motion, yielding higher lift. Moreover, the compliant wing streamlines its shape via camber deformation to mitigate the nonlinear lift-degrading wing–wake interaction to further enhance lift. These bioinspired aeroelastic mechanisms can be used in the development of flapping wing micro-robots. PMID:23760300

  17. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
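
The block-based L1-norm operation that the circuit implements can be sketched in software. Each block of image cells is divided by the L1 norm of the histogram entries it contains, which is what suppresses illumination and camera-gain changes; the array shapes and values below are illustrative, not the chip's configuration:

```python
import numpy as np

def l1_normalize_blocks(cell_hists, block=2, eps=1e-6):
    """L1-normalize cell-level orientation histograms over blocks of cells.

    cell_hists has shape (rows, cols, bins): one gradient-orientation
    histogram per image cell, as in a HOG descriptor. Every block x block
    group of cells is divided by the L1 norm of the entries it contains,
    suppressing global illumination and contrast (gain) changes.
    """
    rows, cols, _ = cell_hists.shape
    out = []
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            v = cell_hists[r:r + block, c:c + block].ravel()
            out.append(v / (np.abs(v).sum() + eps))
    return np.array(out)

hists = np.arange(2 * 2 * 4, dtype=float).reshape(2, 2, 4)
features = l1_normalize_blocks(hists)
# A uniform camera-gain change leaves the descriptor (nearly) unchanged.
features_gain = l1_normalize_blocks(10.0 * hists)
```

The normalized block vectors are what get concatenated into the HOG descriptor and fed to the SVM classifier.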

  18. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Kanemoto, Shigeru; Watanabe, Masaya [The University of Aizu, Aizuwakamatsu (Japan); Yusa, Noritaka [Tohoku University, Sendai (Japan)

    2014-08-15

    The present paper evaluates the applicability of conventional sound analysis techniques and modern machine learning algorithms, including support vector machines and deep learning neural networks, to rotating machine health monitoring. Inner-ring defect and misalignment anomaly sound data measured on a rotating machine mockup test facility are used to verify the various algorithms. Although no remarkable difference in anomaly discrimination performance is found, some methods yield very interesting eigenpatterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.
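    The discrimination task can be pictured with a small numerical sketch: synthetic bearing sounds are reduced to spectral band-energy features, and a nearest-centroid rule (a deliberately simplified stand-in for the paper's SVM and deep-learning classifiers) separates normal signals from inner-ring-defect signals. The signal model, frequencies, and function names are hypothetical.

```python
import numpy as np

def band_energies(signal, n_bands=8):
    """Normalized spectral band-energy feature vector (illustrative)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    e = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
    return e / e.sum()

rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 1000)  # 1 s at 1 kHz sampling (hypothetical)

def normal_signal():
    # Shaft-rotation tone plus measurement noise
    return np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)

def faulty_signal():
    # Inner-ring defect modeled crudely as an added high-frequency tone
    return normal_signal() + 0.8 * np.sin(2 * np.pi * 300 * t)

# Nearest-centroid discrimination: average feature vector per class
centroids = {
    "normal": np.mean([band_energies(normal_signal()) for _ in range(20)], axis=0),
    "fault": np.mean([band_energies(faulty_signal()) for _ in range(20)], axis=0),
}

def classify(signal):
    f = band_energies(signal)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))
```

    The defect tone concentrates energy in a higher frequency band, so the two classes separate cleanly in feature space; an SVM would draw a maximum-margin boundary in that same space instead of comparing centroid distances.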

  19. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    International Nuclear Information System (INIS)

    Kanemoto, Shigeru; Watanabe, Masaya; Yusa, Noritaka

    2014-01-01

    The present paper evaluates the applicability of conventional sound analysis techniques and modern machine learning algorithms, including support vector machines and deep learning neural networks, to rotating machine health monitoring. Inner-ring defect and misalignment anomaly sound data measured on a rotating machine mockup test facility are used to verify the various algorithms. Although no remarkable difference in anomaly discrimination performance is found, some methods yield very interesting eigenpatterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.

  20. Determining the size of a complete disturbance landscape: multi-scale, continental analysis of forest change.

    Science.gov (United States)

    Buma, Brian; Costanza, Jennifer K; Riitters, Kurt

    2017-11-21

    The scale of investigation for disturbance-influenced processes plays a critical role in theoretical assumptions about stability, variance, and equilibrium, as well as in the design of conservation reserves and long-term monitoring programs. Critical consideration of scale is required for robust planning, especially when anticipating future disturbances whose exact locations are unknown. This research quantified disturbance proportion and pattern (as contagion) at multiple scales across North America. This pattern of scale-associated variability can guide selection of study and management extents, for example, to minimize variance (measured as standard deviation) between landscapes within an ecoregion. We identified the proportion and pattern of forest disturbance (30 m grain size) across multiple landscape extents up to 180 km². We explored the variance in the proportion of disturbed area and in the pattern of that disturbance between landscapes (within an ecoregion) as a function of landscape extent. In many ecoregions, variance between landscapes was minimal at broad landscape extents (low standard deviation). Gap-dominated regions showed the least variance, while fire-dominated regions showed the largest. Intensively managed ecoregions displayed unique patterns. A majority of the ecoregions showed low variance between landscapes at some scale, indicating that an appropriate extent for incorporating natural regimes and unknown future disturbances can be identified. The quantification of the scales of disturbance at the ecoregion level provides guidance for anticipating future disturbances that will occur in unknown spatial locations. Information on the extents required to incorporate disturbance patterns into planning is crucial for that process.
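    The between-landscape variance calculation can be sketched on a synthetic grid: the standard deviation of the disturbed proportion across non-overlapping square windows is computed for several window extents, and it shrinks as the extent grows, mirroring the low variance at broad extents reported above. The grid size, disturbance probability, and window extents are invented for illustration; the 30 m grain and ecoregion stratification of the actual analysis are omitted.

```python
import numpy as np

def proportion_variability(disturbance_map, window_sizes):
    """Standard deviation of disturbed proportion across non-overlapping
    square windows, for each window extent (simplified sketch)."""
    out = {}
    n = disturbance_map.shape[0]
    for w in window_sizes:
        props = [disturbance_map[i:i + w, j:j + w].mean()
                 for i in range(0, n - w + 1, w)
                 for j in range(0, n - w + 1, w)]
        out[w] = float(np.std(props))
    return out

rng = np.random.default_rng(1)
# Synthetic 512x512 landscape: each cell disturbed with probability 0.1
landscape = (rng.random((512, 512)) < 0.1).astype(int)
sd_by_extent = proportion_variability(landscape, [8, 32, 128])
```

    For spatially independent disturbance, the between-window standard deviation falls roughly as 1/w, so small study extents exaggerate apparent variability; spatially contagious disturbances such as fire decay more slowly, which is why fire-dominated ecoregions showed the largest variance.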