WorldWideScience

Sample records for machine size scaling

  1. Size reduction machine

    International Nuclear Information System (INIS)

    Fricke, V.

    1999-01-01

    The Size Reduction Machine (SRM) is a mobile platform capable of shearing various shapes and types of metal components at a variety of elevations. This shearing activity can be performed without direct physical movement and placement of the shear head by the operator. The base unit is manually moved and roughly aligned to each cut location. The base contains the electronics, hydraulic pumps, servos, and actuators needed to move the shear-positioning arm. The movable arm gives the shear head six axes of movement and allows cuts to within 4 inches of a wall surface. The unit has a slick electrostatic capture coating to assist in external decontamination. Internal contamination of the unit is controlled by a high-efficiency particulate air (HEPA) filter on the cooling inlet fan. The unit is compact enough to access areas through a 36-inch standard door opening. This paper is an Innovative Technology Summary Report designed to provide potential users with the information they need to quickly determine whether a technology applies to a particular environmental management problem; such reports are also designed for readers who may recommend that a technology be considered by prospective users.

  2. Finite size scaling theory

    International Nuclear Information System (INIS)

    Rittenberg, V.

    1983-01-01

    Fisher's finite-size scaling describes the crossover from the singular behaviour of thermodynamic quantities at the critical point to the analytic behaviour of the finite system. Recent extensions of the method, the transfer matrix technique and the Hamiltonian formalism, are discussed in this paper. The method is presented, with equations for deriving the scaling function, the critical temperature, and the exponent ν. As an application of the method, a 3-state Hamiltonian with Z(3) global symmetry is studied. Diagonalization of the Hamiltonian for finite chains allows one to estimate the critical exponents, and also to discover new phase transitions at lower temperatures. The critical points λ and the indices ν estimated by finite-size scaling are given.
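
    For orientation, a standard form of Fisher's finite-size scaling ansatz (textbook background, not quoted from this record) writes the singular part of the free-energy density of a system of linear size L near the bulk critical point as

      f_s(t, L) = L^{-d} \Phi(t L^{1/\nu}),   t = (T - T_c)/T_c

    so that at t = 0 the gaps of finite chains vary as pure powers of L, which is what lets T_c and the exponent ν be estimated from exact diagonalization of short chains.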

  3. Size scaling of static friction.

    Science.gov (United States)

    Braun, O M; Manini, Nicola; Tosatti, Erio

    2013-02-22

    Sliding friction across a thin soft lubricant film typically occurs by stick slip, the lubricant fully solidifying at stick, yielding and flowing at slip. The static friction force per unit area preceding slip is known from molecular dynamics (MD) simulations to decrease with increasing contact area. That makes the large-size fate of stick slip unclear and unknown; its possible vanishing is important, as it would herald smooth sliding with a dramatic drop of kinetic friction at large size. Here we formulate a scaling law of the static friction force, which for a soft lubricant is predicted to decrease as f_m + Δf/A^γ with increasing contact area A, with γ > 0. Our main finding is that the value of f_m, which controls the survival of stick slip at large size, can be evaluated by simulations of comparably small size. MD simulations of soft lubricant sliding are presented, which verify this theory.
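
    A minimal sketch (synthetic numbers; fm, df and gamma are fit parameters, not values from the paper) of how the proposed law f_m + Δf/A^γ could be fit to static-friction data measured at several contact areas:

      import numpy as np
      from scipy.optimize import curve_fit

      def scaling_law(A, fm, df, gamma):
          # static friction per unit area: f(A) = f_m + Δf / A^γ, with γ > 0
          return fm + df / A**gamma

      A = np.array([1e2, 1e3, 1e4, 1e5, 1e6])        # contact areas (arbitrary units)
      f = np.array([0.52, 0.41, 0.35, 0.32, 0.305])  # synthetic friction data
      (fm, df, gamma), _ = curve_fit(scaling_law, A, f, p0=(0.3, 1.0, 0.3))
      print(f"f_m = {fm:.3f} (large-area limit), gamma = {gamma:.3f}")

    The fitted f_m is the quantity the paper identifies as controlling whether stick slip survives at large size.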

  4. Very Large-Scale Neighborhoods with Performance Guarantees for Minimizing Makespan on Parallel Machines

    NARCIS (Netherlands)

    Brueggemann, T.; Hurink, Johann L.; Vredeveld, T.; Woeginger, Gerhard

    2006-01-01

    We study the problem of minimizing the makespan on m parallel machines. We introduce a very large-scale neighborhood of exponential size (in the number of machines) that is based on a matching in a complete graph. The idea is to partition the jobs assigned to the same machine into two sets. This…

  5. Large-scale Ising-machines composed of magnetic neurons

    Science.gov (United States)

    Mizushima, Koichi; Goto, Hayato; Sato, Rie

    2017-10-01

    We propose Ising machines composed of magnetic neurons, that is, magnetic bits in a recording track. In large-scale machines, the sizes of both neurons and synapses need to be reduced, and neat and smart connections among neurons are also required to achieve all-to-all connectivity. These requirements can be fulfilled by adopting magnetic recording technologies such as racetrack memories and skyrmion tracks: the area of a magnetic bit is almost two orders of magnitude smaller than that of the static random access memory normally used as a semiconductor neuron, and the smart connections among neurons are realized by the read and write methods of these technologies.
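
    For context, the standard objective such a machine minimizes (textbook background, not from the record) is the Ising energy over spins s_i = ±1, with the all-to-all couplings J_ij here carried by the magnetic read/write connections:

      E(s) = - \sum_{i<j} J_{ij} s_i s_j - \sum_i h_i s_i,   s_i ∈ {-1, +1}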

  6. Finite Size Scaling of Perceptron

    OpenAIRE

    Korutcheva, Elka; Tonchev, N.

    2000-01-01

    We study the first-order transition in the model of a simple perceptron with continuous weights and a large but finite value of the inputs. Making the analogy with usual finite-size physical systems, we calculate the shift and the rounding exponents near the transition point. In the case of a general perceptron with a larger variety of inputs, the analysis only gives bounds for the exponents.

  7. Comparison of Machine Learning Techniques in Inferring Phytoplankton Size Classes

    Directory of Open Access Journals (Sweden)

    Shuibo Hu

    2018-03-01

    Full Text Available The size of phytoplankton not only influences its physiology, metabolic rates and the marine food web, but also serves as an indicator of phytoplankton functional roles in ecological and biogeochemical processes. Therefore, some algorithms have been developed to infer the synoptic distribution of phytoplankton cell size, denoted as phytoplankton size classes (PSCs), in surface ocean waters by means of remotely sensed variables. This study, using the NASA bio-Optical Marine Algorithm Data set (NOMAD) high performance liquid chromatography (HPLC) database and satellite match-ups, aimed to compare the effectiveness of modeling techniques, including partial least squares (PLS), artificial neural networks (ANN), support vector machines (SVM) and random forests (RF), and feature selection techniques, including the genetic algorithm (GA), successive projection algorithm (SPA) and recursive feature elimination based on support vector machines (SVM-RFE), for inferring PSCs from remote sensing data. Results showed that: (1) SVM-RFE worked better in selecting sensitive features; (2) RF performed better than PLS, ANN and SVM in calibrating PSC retrieval models; (3) machine learning techniques produced better performance than the chlorophyll-a based three-component method; (4) sea surface temperature, wind stress, and spectral curvature derived from the remote sensing reflectance at 490, 510, and 555 nm were among the features most sensitive to PSCs; and (5) the combination of SVM-RFE feature selection and random forest regression is recommended for inferring PSCs. This study demonstrated the effectiveness of machine learning techniques in selecting sensitive features and calibrating models for PSC estimation from remote sensing.
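
    A minimal sketch of the recommended combination, SVM-RFE feature selection followed by random-forest regression, using scikit-learn with synthetic stand-ins for the remote-sensing features and the PSC response:

      import numpy as np
      from sklearn.feature_selection import RFE
      from sklearn.svm import SVR
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 12))   # stand-ins for Rrs bands, SST, wind stress, ...
      y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=500)  # synthetic PSC fraction

      # SVM-RFE: recursively eliminate the weakest features of a linear SVR
      selector = RFE(SVR(kernel="linear"), n_features_to_select=5).fit(X, y)
      rf = RandomForestRegressor(n_estimators=200, random_state=0)
      rf.fit(X[:, selector.support_], y)   # RF regression on the selected features
      print("selected feature indices:", np.flatnonzero(selector.support_))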

  8. Teraflop-scale Incremental Machine Learning

    OpenAIRE

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-u...

  9. Visuomotor Dissociation in Cerebral Scaling of Size

    NARCIS (Netherlands)

    Potgieser, Adriaan R. E.; de Jong, Bauke M.

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in…

  10. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  11. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  12. Fault size classification of rotating machinery using support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y. S.; Lee, D. H.; Park, S. K. [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)]

    2012-03-15

    Studies on fault diagnosis of rotating machinery have been carried out to determine machinery condition in two ways. The first is a classical approach based on signal processing and analysis of vibration and acoustic signals. The second is to use artificial intelligence techniques to classify machinery conditions into normal or one of the pre-determined fault conditions. The Support Vector Machine (SVM) is well known as an intelligent classifier with robust generalization ability. In this study, a two-step approach is proposed to predict fault types and fault sizes of rotating machinery in nuclear power plants using a multi-class SVM technique. The model first classifies normal operation and 12 fault types, and then identifies the fault size whenever a fault is predicted. Time and frequency domain features are extracted from the measured vibration signals and used as input to the SVM. A test rig is used to simulate normal operation and the well-known 12 artificial fault conditions, with three to six fault sizes each. The application results on the test data show that the present method can estimate fault types as well as fault sizes with high accuracy for bearing- and shaft-related faults and misalignment. Further research, however, is required to identify fault size in the case of unbalance, rubbing, looseness, and coupling-related faults.
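
    A minimal sketch of the two-step idea (placeholder data and class labels, not the authors' rig): one multi-class SVM predicts the fault type from vibration features, and a per-type SVM then predicts the fault size whenever a fault is detected:

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(600, 8))               # time/frequency-domain features
      fault_type = rng.integers(0, 13, size=600)  # 0 = normal, 1..12 = fault types
      fault_size = rng.integers(0, 3, size=600)   # size class, meaningful when type > 0

      type_clf = SVC(kernel="rbf").fit(X, fault_type)   # step 1: normal vs fault type
      size_clf = {t: SVC(kernel="rbf").fit(X[fault_type == t], fault_size[fault_type == t])
                  for t in range(1, 13)}                # step 2: size, one model per type

      x = X[:1]
      t = int(type_clf.predict(x)[0])
      if t > 0:
          print("fault type", t, "size class", int(size_clf[t].predict(x)[0]))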

  13. Fault size classification of rotating machinery using support vector machine

    International Nuclear Information System (INIS)

    Kim, Y. S.; Lee, D. H.; Park, S. K.

    2012-01-01

    Studies on fault diagnosis of rotating machinery have been carried out to determine machinery condition in two ways. The first is a classical approach based on signal processing and analysis of vibration and acoustic signals. The second is to use artificial intelligence techniques to classify machinery conditions into normal or one of the pre-determined fault conditions. The Support Vector Machine (SVM) is well known as an intelligent classifier with robust generalization ability. In this study, a two-step approach is proposed to predict fault types and fault sizes of rotating machinery in nuclear power plants using a multi-class SVM technique. The model first classifies normal operation and 12 fault types, and then identifies the fault size whenever a fault is predicted. Time and frequency domain features are extracted from the measured vibration signals and used as input to the SVM. A test rig is used to simulate normal operation and the well-known 12 artificial fault conditions, with three to six fault sizes each. The application results on the test data show that the present method can estimate fault types as well as fault sizes with high accuracy for bearing- and shaft-related faults and misalignment. Further research, however, is required to identify fault size in the case of unbalance, rubbing, looseness, and coupling-related faults.

  14. Visuomotor Dissociation in Cerebral Scaling of Size.

    Science.gov (United States)

    Potgieser, Adriaan R E; de Jong, Bauke M

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while the size of the drawing remained constant (visual incongruity), or the size of the examples remained constant while subjects were instructed to make changes in size (motor incongruity). These incongruent conditions were compared to congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented the simultaneous use of a 'resized' virtual template and actual picture information, requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest under motor incongruity, while right pre-dorsal premotor activation occurred specifically in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred in all variably sized drawing conditions, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.

  15. Visuomotor Dissociation in Cerebral Scaling of Size.

    Directory of Open Access Journals (Sweden)

    Adriaan R E Potgieser

    Full Text Available Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while the size of the drawing remained constant (visual incongruity), or the size of the examples remained constant while subjects were instructed to make changes in size (motor incongruity). These incongruent conditions were compared to congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented the simultaneous use of a 'resized' virtual template and actual picture information, requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest under motor incongruity, while right pre-dorsal premotor activation occurred specifically in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred in all variably sized drawing conditions, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.

  16. Size-scaling of tensile failure stress in boron carbide

    Energy Technology Data Exchange (ETDEWEB)

    Wereszczak, Andrew A [ORNL]; Kirkland, Timothy Philip [ORNL]; Strong, Kevin T [ORNL]; Jadaan, Osama M. [University of Wisconsin, Platteville]; Thompson, G. A. [U.S. Army Dental and Trauma Research Detachment, Great Lakes]

    2010-01-01

    Weibull strength-size scaling in a rotary-ground, hot-pressed boron carbide is described for strength test coupons sampling effective areas from the very small (~0.001 mm²) to the very large (~40,000 mm²). Equibiaxial flexure and Hertzian testing were used for the strength testing. Characteristic strengths for several different specimen geometries are analyzed as a function of effective area. The characteristic strength was found to increase substantially with decreasing effective area, and it exhibited a bilinear relationship. Machining damage limited the strength measured with equibiaxial flexure testing for effective areas greater than ~1 mm², while microstructural-scale flaws limited the strength for effective areas less than 0.1 mm² in the Hertzian testing. The selection of a ceramic strength to account for ballistically induced tile deflection and for expanding cavity modeling is considered in the context of the measured strength-size scaling.
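
    The scaling behind such an analysis is the standard Weibull relation (textbook form, not a formula quoted from the report): for specimen sets sampling effective areas A_1 and A_2, the characteristic strengths obey

      \sigma_{\theta,1} / \sigma_{\theta,2} = (A_2 / A_1)^{1/m}

    where m is the Weibull modulus, so a bilinear trend on log-log axes signals two flaw populations (here, machining damage versus microstructural flaws) with different moduli.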

  17. Finite size scaling and lattice gauge theory

    International Nuclear Information System (INIS)

    Berg, B.A.

    1986-01-01

    Finite size (Fisher) scaling is investigated for four dimensional SU(2) and SU(3) lattice gauge theories without quarks. It allows one to disentangle violations of (asymptotic) scaling from finite volume corrections. Mass spectrum, string tension, deconfinement temperature and lattice β-function are considered. For appropriate volumes, Monte Carlo investigations seem to be able to control the finite volume continuum limit. Contact is made with Luescher's small volume expansion and possibly also with the asymptotic large volume behavior. 41 refs., 19 figs

  18. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution, at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state of the art approaches.
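
    One classical route to the scale-up described here is to pair a randomized low-dimensional feature map with ridge (Tikhonov) regularization; the sketch below uses scikit-learn's random Fourier feature approximation of an RBF kernel (an illustration of the general idea, not the speaker's specific algorithm):

      import numpy as np
      from sklearn.kernel_approximation import RBFSampler
      from sklearn.linear_model import Ridge
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(2000, 5))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)

      # approximate kernel ridge regression at linear cost in the sample size
      model = make_pipeline(RBFSampler(gamma=0.5, n_components=300, random_state=0),
                            Ridge(alpha=1e-2))
      model.fit(X, y)
      print("train R^2:", round(model.score(X, y), 3))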

  19. Finite size scaling and spectral density studies

    International Nuclear Information System (INIS)

    Berg, B.A.

    1991-01-01

    Finite size scaling (FSS) and spectral density (SD) studies are reported for the deconfining phase transition. This talk concentrates on Monte Carlo (MC) results for pure SU(3) gauge theory, obtained in collaboration with Alves and Sanielevici, but the methods are expected to be useful for full QCD as well. (orig.)

  20. Finite size scaling and phenomenological renormalization

    International Nuclear Information System (INIS)

    Derrida, B.; Seze, L. de; Vannimenus, J.

    1981-05-01

    The basic equations of the phenomenological renormalization method are recalled. A simple derivation using finite-size scaling is presented. The convergence of the method is studied analytically for the Ising model. Using this method we give predictions for the 2d bond percolation. Finally we discuss how the method can be applied to random systems
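
    For reference, the basic equation recalled here (Nightingale's phenomenological renormalization, standard background) matches the scaled correlation lengths of two finite systems of sizes L and L', which fixes the finite-size estimate of the critical point and, by differentiation, the exponent ν:

      \xi_L(T^*) / L = \xi_{L'}(T^*) / L'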

  1. SIZE SCALING RELATIONSHIPS IN FRACTURE NETWORKS

    International Nuclear Information System (INIS)

    Wilson, Thomas H.

    2000-01-01

    The research conducted under DOE grant DE-FG26-98FT40385 provides a detailed assessment of size scaling issues in natural fracture and active fault networks that extend over scales from several tens of kilometers to less than a tenth of a meter. This study incorporates analysis of data obtained from several sources, including: natural fracture patterns photographed in the Appalachian field area, natural fracture patterns presented by other workers in the published literature, patterns of active faulting in Japan mapped at a scale of 1:100,000, and lineament patterns interpreted from satellite-based radar imagery obtained over the Appalachian field area. The complexity of these patterns is always found to vary with scale. In general, but not always, patterns become less complex with scale. This tendency may reverse, as can be inferred from the complexity of high-resolution radar images (8 meter pixel size), which are characterized by patterns that are less complex than those observed over smaller areas on the ground surface. Model studies reveal that changes in the complexity of a fracture pattern can be associated with dominant spacings between the fractures comprising the pattern, or roughly with the rock areas bounded by fractures of a certain scale. While the results do not offer a magic number (the fractal dimension) to characterize fracture networks at all scales, the modeling and analysis provide results that can be interpreted directly in terms of the physical properties of the natural fracture or active fault complex. These breaks roughly define the size of fracture-bounded regions at different scales. The larger, more extensive sets of fractures will intersect and enclose regions of a certain size, whereas smaller, less extensive sets will do the same, i.e. subdivide the rock into even smaller regions. The interpretation varies depending on the number of sets that are present, but the scale breaks in the logN/logr plots serve as a guide to interpreting the…

  2. A method of size inspection for fruit with machine vision

    Science.gov (United States)

    Rao, Xiuqin; Ying, Yibin

    2005-11-01

    A real-time machine vision system for fruit quality inspection was developed, consisting of rollers, an encoder, a lighting chamber, a TMS-7DSP CCD camera (PULNIX Inc.), a computer (P4 1.8G, 128M) and a set of grading controllers. Each image was binarized and its edge detected with a line-scan-based digital image description; the minimum enclosing rectangle (MER) was first applied to detect the size of the fruit, but failed, because the points it tests differ from those measured with a vernier caliper. An improved method, called a software vernier caliper, was therefore developed. A line is drawn between the centroid O of the fruit and a point A on the edge, and its second intersection with the edge is noted as B; a point C between A and B is selected, and a point D on the edge is searched such that CD is perpendicular to AB. By moving C between A and B, the maximum length of CD is recorded as an extremum value. By moving A from the start to the halfway point of the edge, a series of such extrema is obtained. 80 navel oranges were tested; the maximum error of the diameter was less than 1 mm.
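
    A simplified sketch of the caliper idea (a projection sweep over the contour, not the authors' exact point search): measure the projected width of the detected edge over many directions and take the extrema as software-caliper diameters:

      import numpy as np

      def caliper_widths(edge_xy, n_angles=180):
          """Projected width of a closed contour over a sweep of directions."""
          pts = np.asarray(edge_xy, dtype=float)
          angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
          dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
          proj = pts @ dirs.T                  # (n_points, n_angles) projections
          return proj.max(axis=0) - proj.min(axis=0)

      t = np.linspace(0, 2 * np.pi, 400)       # a slightly elliptical "orange" outline
      edge = np.c_[40 * np.cos(t), 36 * np.sin(t)]
      w = caliper_widths(edge)
      print("max diameter %.1f px, min diameter %.1f px" % (w.max(), w.min()))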

  3. Development of large size NC trepanning and honing machine

    International Nuclear Information System (INIS)

    Wada, Yoshiei; Aono, Fumiaki; Siga, Toshihiko; Sudo, Eiichi; Takasa, Seiju; Fukuyama, Masaaki; Sibukawa, Koichi; Nakagawa, Hirokatu

    2010-01-01

    Due to the recent increase in world energy demand, construction of a considerable number of nuclear and fossil power plants has proceeded and more are planned. High-capacity plants require large forged components, such as monoblock turbine rotor shafts, whose dimensions tend to increase. Some of these components have a center bore for material testing, NDE and other uses. In order to cope with the increased production of these large forgings with center bores, a new trepanning machine dedicated to boring deep holes was developed at JSW, drawing on accumulated experience and expert know-how. The machine is the world's largest 400-t trepanning and honing machine with numerical control, and it has many advantages in safety, machining precision, machining efficiency, operability, labor saving, and energy saving. Furthermore, transfer of technical skill became easier through a centralized monitoring system based on numerically analysed expert know-how. (author)

  4. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of 'prototype vectors' for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
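
    The core trick, approximating the full kernel matrix from a small set of prototype columns, can be sketched as a Nyström-style low-rank approximation (an illustration of the idea, not the PVM algorithm itself):

      import numpy as np

      def rbf(A, B, gamma=0.5):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 3))
      protos = X[rng.choice(len(X), 50, replace=False)]   # prototype vectors

      K_nm = rbf(X, protos)                               # n x m cross-kernel
      K_mm = rbf(protos, protos)                          # m x m prototype kernel
      K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T     # rank-m approximation of K

      K_exact = rbf(X, X)
      print("relative error:",
            np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact))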

  5. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    Science.gov (United States)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising not only for gas storage in MOFs but also for many other materials science projects.

  6. Optimizing Distributed Machine Learning for Large Scale EEG Data Set

    Directory of Open Access Journals (Sweden)

    M Bilal Shaikh

    2017-06-01

    Full Text Available Distributed Machine Learning (DML) has gained more importance than ever in this era of Big Data. There are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation of data is at its limit; however, increasing the number of machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random data distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We have conducted an empirical study using an EEG data set collected through the P300 Speller component of an ERP (Event-Related Potential) paradigm, which is widely used in BCI problems; it helps in translating the intention of a subject while performing any cognitive task. EEG data contain noise due to waves generated by other activities in the brain, which contaminates the true P300 signal. Machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning data into different chunks and preparing distributed models using an Elastic CV Classifier. To present a case for optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that can impact the accuracy of distributed machine learners on average. Our results show better average AUC compared to the average AUC obtained after random data partitioning, which gives the user no control over data partitioning; the domain-specific intelligent partitioning improves the average accuracy of the distributed learner. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.

  7. New Balancing Equipment for Mass Production of Small and Medium-Sized Electrical Machines

    DEFF Research Database (Denmark)

    Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika

    2010-01-01

    The level of vibration and noise is an important feature, and it is good practice to explain its significance as an indicator of the quality of electrical machines. The mass production of small and medium-sized electrical machines demands speed (short typical measurement time), reliability…

  8. Molecular-Sized DNA or RNA Sequencing Machine | NCI Technology Transfer Center | TTC

    Science.gov (United States)

    The National Cancer Institute's Gene Regulation and Chromosome Biology Laboratory is seeking statements of capability or interest from parties interested in collaborative research to co-develop a molecular-sized DNA or RNA sequencing machine.

  9. Downscaling Coarse Scale Microwave Soil Moisture Product using Machine Learning

    Science.gov (United States)

    Abbaszadeh, P.; Moradkhani, H.; Yan, H.

    2016-12-01

    Soil moisture (SM) is a key variable in partitioning and examining the global water-energy cycle, agricultural planning, and water resource management. It is also strongly coupled with climate change, playing an important role in weather forecasting, drought monitoring and prediction, flood modeling and irrigation management. Although satellite retrievals can provide unprecedented information on soil moisture at a global scale, the products might be inadequate for basin-scale study or regional assessment. To improve the spatial resolution of SM, this work presents a novel approach based on a Machine Learning (ML) technique that allows for downscaling of satellite soil moisture to fine resolution. For this purpose, the SMAP L-band radiometer SM products were used and conditioned on the Variable Infiltration Capacity (VIC) model prediction to describe the relationship between the coarse- and fine-scale soil moisture data. The proposed downscaling approach was applied to a western US basin and the products were compared against the available SM data from in-situ gauge stations. The obtained results indicated a great potential of the machine learning technique to derive fine-resolution soil moisture information, which is currently used for land data assimilation applications.
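
    In outline (schematic arrays only; the variable names are placeholders, not the study's data), the downscaling step is a regression from the coarse SMAP value plus fine-scale covariates onto fine-scale soil moisture:

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)
      n = 5000
      coarse_sm = rng.uniform(0.05, 0.45, n)   # SMAP value of the enclosing coarse cell
      elev, ndvi = rng.normal(size=(2, n))     # fine-scale covariates (stand-ins)
      fine_sm = coarse_sm + 0.03 * ndvi - 0.02 * elev + 0.01 * rng.normal(size=n)

      X = np.c_[coarse_sm, elev, ndvi]
      model = GradientBoostingRegressor().fit(X, fine_sm)  # learn coarse-to-fine mapping
      print("R^2 on the training grid:", round(model.score(X, fine_sm), 3))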

  10. Classification of large-sized hyperspectral imagery using fast machine learning algorithms

    Science.gov (United States)

    Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira

    2017-07-01

    We present a framework of fast machine learning algorithms for the classification of large-sized hyperspectral images, from a theoretical to a practical viewpoint. In particular, we assess the performance of random forests (RF), rotation forests (RoF), and extreme learning machines (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. For quantitative analysis, we pay attention to comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.

  11. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    Science.gov (United States)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and the lots have to be assigned to unrelated parallel machines for processing. In one version of the problem the maximum machine completion time should be minimized; in another, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small and are therefore not considered. We derive optimal polynomial-time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem to energy-efficient processor scheduling is considered.
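
    For the continuously divisible makespan version, one natural scheme (a sketch under our own simplifying reading, not an algorithm from the paper) is bisection on the makespan T: a machine is usable within budget T if its lower-bound lot finishes in time, and the target quantity Q is feasible iff the usable machines' capacities sum to at least Q:

      def feasible(T, Q, machines):
          """machines: (lo, hi, p) triples, p an increasing processing-time oracle."""
          total = 0.0
          for lo, hi, p in machines:
              if p(lo) <= T:                   # machine usable within budget T
                  a, b = lo, hi                # invert p by bisection: largest lot done by T
                  for _ in range(60):
                      m = (a + b) / 2
                      a, b = (m, b) if p(m) <= T else (a, m)
                  total += a
          return total >= Q

      def min_makespan(Q, machines, T_hi=1e6):
          a, b = 0.0, T_hi
          for _ in range(60):
              m = (a + b) / 2
              a, b = (a, m) if feasible(m, Q, machines) else (m, b)
          return b

      machines = [(1.0, 10.0, lambda v: 0.5 * v * v),   # hypothetical speed curves
                  (2.0, 8.0, lambda v: v + 1.0)]
      print(round(min_makespan(12.0, machines), 3))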

  12. Zooniverse - Web scale citizen science with people and machines. (Invited)

    Science.gov (United States)

    Smith, A.; Lynn, S.; Lintott, C.; Simpson, R.

    2013-12-01

    The Zooniverse (zooniverse.org) began in 2007 with the launch of Galaxy Zoo, a project in which more than 175,000 people provided shape analyses of more than 1 million galaxy images sourced from the Sloan Digital Sky Survey. These galaxy 'classifications', some 60 million in total, have since been used to produce more than 50 peer-reviewed publications based not only on the original research goals of the project but also on serendipitous discoveries made by the volunteer community. Based upon the success of Galaxy Zoo, the team have gone on to develop more than 25 web-based citizen science projects, all with a strong research focus in a range of subjects from astronomy to zoology where human-based analysis still exceeds that of machine intelligence. Over the past 6 years Zooniverse projects have collected more than 300 million data analyses from over 1 million volunteers, providing fantastically rich datasets not only for the individuals working to produce research from their projects but also for the machine learning and computer vision research communities. The Zooniverse platform has always been developed to be the 'simplest thing that works', implementing only the most rudimentary algorithms for functionality such as task allocation and user-performance metrics - simplifications necessary to scale the Zooniverse such that the core team of developers and data scientists can remain small and the cost of running the computing infrastructure relatively modest. To date these simplifications have been appropriate for the data volumes and analysis tasks being addressed. This situation, however, is changing: next-generation telescopes such as the Large Synoptic Survey Telescope (LSST) will produce data volumes dwarfing those previously analyzed. If citizen science is to have a part to play in analyzing these next-generation datasets, then the Zooniverse will need to evolve into a smarter system capable, for example, of modeling the abilities of users and the complexities of…

  13. Separating the Classes of Recursively Enumerable Languages Based on Machine Size

    Czech Academy of Sciences Publication Activity Database

    van Leeuwen, J.; Wiedermann, Jiří

    2015-01-01

    Vol. 26, No. 6 (2015), pp. 677-695 ISSN 0129-0541 R&D Projects: GA ČR GAP202/10/1333 Grant - others: GA ČR(CZ) GA15-04960S Institutional support: RVO:67985807 Keywords: recursively enumerable languages * RE hierarchy * finite languages * machine size * descriptional complexity * Turing machines with advice Subject RIV: IN - Informatics, Computer Science Impact factor: 0.467, year: 2015

  14. Scaling the drop size in coflow experiments

    International Nuclear Information System (INIS)

    Castro-Hernandez, E; Gordillo, J M; Gundabala, V; Fernandez-Nieves, A

    2009-01-01

    We perform extensive experiments with coflowing liquids in microfluidic devices and provide a closed expression for the drop size as a function of measurable parameters in the jetting regime that accounts for the experimental observations; this expression works irrespective of how the jets are produced, providing a powerful design tool for this type of experiment.

  15. Scaling the drop size in coflow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Castro-Hernandez, E; Gordillo, J M [Area de Mecanica de Fluidos, Universidad de Sevilla, Avenida de los Descubrimientos s/n, 41092 Sevilla (Spain)]; Gundabala, V; Fernandez-Nieves, A [School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States)], E-mail: jgordill@us.es

    2009-07-15

    We perform extensive experiments with coflowing liquids in microfluidic devices and provide a closed expression for the drop size as a function of measurable parameters in the jetting regime that accounts for the experimental observations; this expression works irrespective of how the jets are produced, providing a powerful design tool for this type of experiment.

  16. Transportation and Production Lot-size for Sugarcane under Uncertainty of Machine Capacity

    Directory of Open Access Journals (Sweden)

    Sudtachat Kanchala

    2018-01-01

    Full Text Available The integrated transportation and production lot-size problem has an important effect on the total operating cost of sugar factories. In this research, we formulate a mathematical model that combines these two problems as a two-stage stochastic programming model. In the first stage, we determine the lot sizes of the transportation problem and allocate a fixed number of vehicles to transport sugarcane to the mill factory, considering uncertainty in the machine (mill) capacities. After the machine (mill) capacities are realized, in the second stage we determine the production lot sizes and decide how many units of sugarcane to hold in front of the mills, based on discrete random variables for the machine (mill) capacities. We investigate the model using a small-sized problem. The results show that the optimal solutions tend to choose the closest fields and the lowest holding cost per unit (at the fields) when transporting sugarcane to the mill factory. We compare our model with the worst-case model (full capacity); the results show that our model is more efficient than the worst-case model.
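
    The two-stage structure can be illustrated with a toy expectation (all numbers invented): a first-stage transport decision is scored by its transport cost plus the expected second-stage holding cost over the discrete mill-capacity scenarios:

      # toy two-stage evaluation: one mill, discrete capacity scenarios
      scenarios = [(0.3, 80.0), (0.5, 100.0), (0.2, 120.0)]  # (probability, capacity)
      transport_cost, holding_cost = 2.0, 0.5                # per unit (invented)

      def expected_cost(shipped):
          stage1 = transport_cost * shipped
          stage2 = sum(p * holding_cost * max(0.0, shipped - cap)
                       for p, cap in scenarios)   # units held in front of the mill
          return stage1 + stage2

      for q in (80, 100, 120):
          print(q, round(expected_cost(q), 1))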

  17. Real-time spot size camera for pulsed high-energy radiographic machines

    International Nuclear Information System (INIS)

    Watson, S.A.

    1993-01-01

    The focal spot size of an x-ray source is a critical parameter that degrades resolution in a flash radiograph. For best results, a small round focal spot is required; a fast and accurate measurement of the spot size is therefore highly desirable to facilitate machine tuning. This paper describes two systems developed for Los Alamos National Laboratory's Pulsed High-Energy Radiographic Machine Emitting X-rays (PHERMEX) facility. The first uses a CCD camera combined with high-brightness fluors, while the second utilizes phosphor storage screens. Other techniques typically record only the line spread function on radiographic film, while the systems described in this paper measure the more general two-dimensional point spread function and the associated modulation transfer function in real time for shot-to-shot comparison.
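
    The quantities named here are linked by standard Fourier optics (background, not from the paper): the line spread function is a projection of the two-dimensional point spread function, and the MTF is the modulus of its Fourier transform,

      LSF(x) = \int PSF(x, y) dy,   MTF(f) = | \int LSF(x) e^{-2\pi i f x} dx |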

  18. TensorFlow: A system for large-scale machine learning

    OpenAIRE

    Abadi, Martín; Barham, Paul; Chen, Jianmin; Chen, Zhifeng; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Irving, Geoffrey; Isard, Michael; Kudlur, Manjunath; Levenberg, Josh; Monga, Rajat; Moore, Sherry; Murray, Derek G.

    2016-01-01

    TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexib...

  19. High-precision micro/nano-scale machining system

    Science.gov (United States)

    Kapoor, Shiv G.; Bourne, Keith Allen; DeVor, Richard E.

    2014-08-19

    A high precision micro/nanoscale machining system. A multi-axis movement machine provides relative movement along multiple axes between a workpiece and a tool holder. A cutting tool is disposed on a flexible cantilever held by the tool holder, the tool holder being movable to provide at least two of the axes to set the angle and distance of the cutting tool relative to the workpiece. A feedback control system uses measurement of deflection of the cantilever during cutting to maintain a desired cantilever deflection and hence a desired load on the cutting tool.

  20. Large-scale Machine Learning in High-dimensional Datasets

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen

    Over the last few decades computers have gotten to play an essential role in our daily life, and data is now being collected in various domains at a faster pace than ever before. This dissertation presents research advances in four machine learning fields that all relate to the challenges imposed… …are better at modeling local heterogeneities. In the field of machine learning for neuroimaging, we introduce learning protocols for real-time functional Magnetic Resonance Imaging (fMRI) that allow for dynamic intervention in the human decision process. Specifically, the model exploits the structure of f…

  1. Graphene-based bimorphs for micron-sized, autonomous origami machines.

    Science.gov (United States)

    Miskin, Marc Z; Dorsey, Kyle J; Bircan, Baris; Han, Yimo; Muller, David A; McEuen, Paul L; Cohen, Itai

    2018-01-16

    Origami-inspired fabrication presents an attractive platform for miniaturizing machines: thinner layers of folding material lead to smaller devices, provided that key functional aspects, such as conductivity, stiffness, and flexibility, are preserved. Here, we show origami fabrication at its ultimate limit by using 2D atomic membranes as a folding material. As a prototype, we bond graphene sheets to nanometer-thick layers of glass to make ultrathin bimorph actuators that bend to micrometer radii of curvature in response to small strain differentials. These strains are two orders of magnitude lower than the fracture threshold for the device, thus maintaining conductivity across the structure. By patterning 2-μm-thick rigid panels on top of the bimorphs, we localize bending to the unpatterned regions to produce folds. Although the graphene bimorphs are only nanometers thick, they can lift these panels, the weight equivalent of a 500-nm-thick silicon chip. Using panels and bimorphs, we can scale down existing origami patterns to produce a wide range of machines. These machines change shape in fractions of a second when crossing a tunable pH threshold, showing that they sense their environments, respond, and perform useful functions on time and length scales comparable with those of microscale biological organisms. With the incorporation of electronic, photonic, and chemical payloads, these basic elements will become a powerful platform for robotics at the micrometer scale.

  2. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto

    2018-01-04

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  3. Large-scale ocean connectivity and planktonic body size

    KAUST Repository

    Villarino, Ernesto; Watson, James R.; Jönsson, Bror; Gasol, Josep M.; Salazar, Guillem; Acinas, Silvia G.; Estrada, Marta; Massana, Ramón; Logares, Ramiro; Giner, Caterina R.; Pernice, Massimo C.; Olivar, M. Pilar; Citores, Leire; Corell, Jon; Rodríguez-Ezpeleta, Naiara; Acuña, José Luis; Molina-Ramírez, Axayacatl; González-Gordillo, J. Ignacio; Cózar, Andrés; Martí, Elisa; Cuesta, José A.; Agusti, Susana; Fraile-Nuez, Eugenio; Duarte, Carlos M.; Irigoien, Xabier; Chust, Guillem

    2018-01-01

    Global patterns of planktonic diversity are mainly determined by the dispersal of propagules with ocean currents. However, the role that abundance and body size play in determining spatial patterns of diversity remains unclear. Here we analyse spatial community structure - β-diversity - for several planktonic and nektonic organisms from prokaryotes to small mesopelagic fishes collected during the Malaspina 2010 Expedition. β-diversity was compared to surface ocean transit times derived from a global circulation model, revealing a significant negative relationship that is stronger than environmental differences. Estimated dispersal scales for different groups show a negative correlation with body size, where less abundant large-bodied communities have significantly shorter dispersal scales and larger species spatial turnover rates than more abundant small-bodied plankton. Our results confirm that the dispersal scale of planktonic and micro-nektonic organisms is determined by local abundance, which scales with body size, ultimately setting global spatial patterns of diversity.

  4. Particle size of radioactive aerosols generated during machine operation in high-energy proton accelerators

    International Nuclear Information System (INIS)

    Oki, Yuichi; Kanda, Yukio; Kondo, Kenjiro; Endo, Akira

    2000-01-01

    In high-energy accelerators, non-radioactive aerosols are abundantly generated by the high radiation doses during machine operation. Under such conditions, radioactive atoms, which are produced through various nuclear reactions in the air of accelerator tunnels, form radioactive aerosols. These aerosols might be inhaled by workers who enter the tunnel just after the beam stops. Their particle size is very important information for the estimation of internal exposure doses. In this work, focusing on typical radionuclides such as ⁷Be and ²⁴Na, their particle size distributions are studied. An aluminum chamber was placed in the EP2 beam line of the 12-GeV proton synchrotron at the High Energy Accelerator Research Organization (KEK). Aerosol-free air was introduced into the chamber, and aerosols formed in the chamber were sampled during machine operation. A screen-type diffusion battery was employed in the aerosol-size analysis. Assuming that the aerosols have log-normal size distributions, their size distributions were obtained from the radioactivity concentrations at the entrance and exit of the diffusion battery. The radioactivity of the aerosols was measured with a Ge detector system, and the concentrations of non-radioactive aerosols were obtained using a condensation particle counter (CPC). The aerosol size (radius) for ⁷Be and ²⁴Na was found to be 0.01-0.04 μm, and was always larger than that of the non-radioactive aerosols. The concentration of non-radioactive aerosols was found to be 10⁶-10⁷ particles/cm³. The size of the radioactive aerosols was much smaller than that of ordinary atmospheric aerosols. Internal doses due to inhalation of the radioactive aerosols were estimated based on the respiratory tract model of ICRP Pub. 66. (author)

  5. The influence of negative training set size on machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

    The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of applying machine learning methods was examined for sets containing a fixed number of positive examples and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed, in conjunction with some decrease in hit recall. The analysis of the dynamics of those variations lets us recommend an optimal composition of the training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of the negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
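
    The experimental design can be sketched as follows (random stand-in 'fingerprints'; the study itself used ZINC decoys, MACCS/CDK fingerprints and five classifiers): hold the positives fixed, grow the negative set, and track an evaluation metric such as MCC:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import matthews_corrcoef
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      pos = (rng.random((200, 166)) < 0.4).astype(float)   # toy binary fingerprints
      for n_neg in (200, 1000, 5000):                      # growing negative set
          neg = (rng.random((n_neg, 166)) < 0.3).astype(float)
          X = np.vstack([pos, neg])
          y = np.r_[np.ones(len(pos)), np.zeros(n_neg)]
          Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
          clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
          print(n_neg, round(matthews_corrcoef(yte, clf.predict(Xte)), 3))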

  6. Decreased attention to object size information in scale errors performers.

    Science.gov (United States)

    Grzyb, Beata J; Cangelosi, Angelo; Cattani, Allegra; Floccia, Caroline

    2017-05-01

    Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. Existing explanations of these curious action errors assume (but have never explicitly tested) children's decreased attention to object size information. This study investigated attention to object size information in scale errors performers. Two groups of children aged 18-25 months (N=52) and 48-60 months (N=23) were tested in two consecutive tasks: an action task that replicated the original scale errors elicitation situation, and a looking task that involved watching, on a computer screen, actions performed with objects of adequate or inadequate size. Our key finding - that children performing scale errors in the action task subsequently pay less attention to size changes than non-scale errors performers in the looking task - suggests that the origins of scale errors in childhood operate already at the perceptual level, and not at the action level. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Size structure, not metabolic scaling rules, determines fisheries reference points

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Beyer, Jan

    2015-01-01

    Impact assessments of fishing on a stock require parameterization of vital rates: growth, mortality and recruitment. For 'data-poor' stocks, vital rates may be estimated from empirical size-based relationships or from life-history invariants. However, a theoretical framework to synthesize these empirical relations is lacking. Here, we combine life-history invariants, metabolic scaling and size-spectrum theory to develop a general size- and trait-based theory for the demography and recruitment of exploited fish stocks. Important concepts are physiological or metabolic scaled mortalities and flux… …is that larger species have a higher egg production per recruit than small species. This means that density dependence is stronger for large than for small species and has the consequence that fisheries reference points that incorporate recruitment do not obey metabolic scaling rules. This result implies...

  8. Development and psychometric evaluation of the breast size satisfaction scale.

    Science.gov (United States)

    Pahlevan Sharif, Saeed

    2017-10-09

    Purpose The purpose of this paper is to develop and evaluate psychometrically an instrument named the Breast Size Satisfaction Scale (BSSS) to assess breast size satisfaction. Design/methodology/approach The present scale was developed using a set of 16 computer-generated 3D images of breasts to overcome some of the limitations of existing instruments. The images were presented to participants and they were asked to select the figure that most accurately depicted their actual breast size and the figure that most closely represented their ideal breast size. Breast size satisfaction was computed by subtracting the absolute value of the difference between ideal and actual perceived size from 16, such that higher values indicate greater breast size satisfaction. Findings Study 1 (n=65 female undergraduate students) showed good test-retest reliability and study 2 (n=1,000 Iranian women, aged 18 years and above) provided support for convergent validity using a nomological network approach. Originality/value The BSSS demonstrated good psychometric properties and thus can be used in future studies to assess breast size satisfaction among women.
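
    The scoring rule reduces to one line (a restatement of the formula in the abstract; figure indices are assumed to run from 1 to 16):

      def bsss_score(actual: int, ideal: int) -> int:
          """Breast size satisfaction: 16 minus |ideal - actual|; higher = more satisfied."""
          return 16 - abs(ideal - actual)

      print(bsss_score(actual=7, ideal=9))   # -> 14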

  9. Finite-size scaling of survival probability in branching processes

    OpenAIRE

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Alvaro

    2014-01-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We reveal the finite-size scaling law of the survival probability for a given branching process ruled by a probability distribution of the number of offspring per element whose standard deviation is finite, obtaining the exact scaling function as well as the critical exponents. Our findings prove the universal behavi...
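
    For orientation, the classical result this scaling law refines (Kolmogorov's asymptotic, standard background): for a critical Galton-Watson process with finite offspring variance σ², the probability of surviving t generations decays as

      P_surv(t) ~ 2 / (σ² t),   t → ∞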

  10. Perception of acoustic scale and size in musical instrument sounds.

    Science.gov (United States)

    van Dinther, Ralph; Patterson, Roy D

    2006-10-01

    There is size information in natural sounds. For example, as humans grow in height, their vocal tracts increase in length, producing a predictable decrease in the formant frequencies of speech sounds. Recent studies have shown that listeners can make fine discriminations about which of two speakers has the longer vocal tract, supporting the view that the auditory system discriminates changes on the acoustic-scale dimension. Listeners can also recognize vowels scaled well beyond the range of vocal tracts normally experienced, indicating that perception is robust to changes in acoustic scale. This paper reports two perceptual experiments designed to extend research on acoustic scale and size perception to the domain of musical sounds: The first study shows that listeners can discriminate the scale of musical instrument sounds reliably, although not quite as well as for voices. The second experiment shows that listeners can recognize the family of an instrument sound which has been modified in pitch and scale beyond the range of normal experience. We conclude that processing of acoustic scale in music perception is very similar to processing of acoustic scale in speech perception.

  11. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short fragment length (Sanger sequencing, for example, yields on average 700 bp fragments) and the unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that the open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large-scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion Large-scale machine learning methods are well-suited for gene…
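
    A minimal sketch of the described second stage (synthetic feature values; in the real pipeline the first three features come from the stage-one linear discriminants): a small neural network maps per-ORF features to a coding probability:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      n = 2000
      # columns: monocodon score, dicodon score, TIS score, ORF length, fragment GC
      X = rng.normal(size=(n, 5))
      y = (X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 3] > 0).astype(int)  # toy labels

      net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
      net.fit(X, y)
      print("P(coding) for first ORF:", round(net.predict_proba(X[:1])[0, 1], 3))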

  12. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    .... Demand for parallelizing learning algorithms is highly task-specific: in some settings it is driven by the enormous dataset sizes, in others by model complexity or by real-time performance requirements...

  13. Does water transport scale universally with tree size?

    Science.gov (United States)

    F.C. Meinzer; B.J. Bond; J.M. Warren; D.R. Woodruff

    2005-01-01

    1. We employed standardized measurement techniques and protocols to describe the size dependence of whole-tree water use and cross-sectional area of conducting xylem (sapwood) among several species of angiosperms and conifers. 2. The results were not inconsistent with previously proposed 3/4-power scaling of water transport with estimated above-...

  14. Machine Learning for Big Data: A Study to Understand Limits at Scale

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Del-Castillo-Negrete, Carlos Emilio [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-21

    This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and to train on larger samples worth the effort? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results. We conclude with the following observations: (i) accuracy of the learning algorithm increases with increasing sample size but saturates at a point, beyond which more samples do not contribute to better accuracy/learning, (ii) the richness of the feature space dictates performance - both accuracy and training time, (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times), but the improvements are not commensurate with the effort required for feature computation and training, and (iv) accuracy of the learning algorithms
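
    The core experimental loop behind observation (i) - accuracy rising with sample size and then saturating - can be sketched in Python with scikit-learn as below; the dataset and classifier are placeholders, not those of the report:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=20000, n_features=50,
                                   n_informative=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)

        # Accuracy versus training-set size: rises, then saturates.
        for m in [100, 300, 1000, 3000, 10000]:
            clf = LogisticRegression(max_iter=1000).fit(X_tr[:m], y_tr[:m])
            acc = accuracy_score(y_te, clf.predict(X_te))
            print(f"n={m:>6}  accuracy={acc:.3f}")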

  15. Automated Bug Assignment: Ensemble-based Machine Learning in Large Scale Industrial Contexts

    OpenAIRE

    Jonsson, Leif; Borg, Markus; Broman, David; Sandahl, Kristian; Eldh, Sigrid; Runeson, Per

    2016-01-01

    Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment techniques using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learni...

  16. Non-machinery dialysis that achieves blood purification therapy without using full-scale dialysis machines.

    Science.gov (United States)

    Abe, Takaya; Onoda, Mistutaka; Matsuura, Tomohiko; Sugimura, Jun; Obara, Wataru; Sato, Toshiya; Takahashi, Mihoko; Chiba, Kenta; Abe, Tomiya

    2017-09-01

    An electrical or water supply and a blood purification machine are required for renal replacement therapy. In the case of a massive earthquake, acute kidney injury can occur in large numbers and over a wide area, and there is a potential risk that the current supply will be unable to cope with the acute kidney injury cases. Non-machinery dialysis, however, requires only dedicated circuits and does not need full-scale dialysis machines. We performed perfusion experiments that used non-machinery dialysis and recent blood purification machines in 30-min intervals, and the effectiveness of non-machinery dialysis was evaluated by assessing the removal efficiency of potassium, which causes lethal arrhythmia during acute kidney injury. The non-machinery dialysis potassium removal rate was at the same level as continuous blood purification machines with a dialysate flow rate of 5 L/h after 15 min, and as continuous blood purification machines with a dialysate flow rate of 3 L/h after 30 min. Non-machinery dialysis requires a dedicated dialysate circuit, frequent bag replacement, and fresh dialysate once every 30 min. However, it can be seen as an effective renal replacement therapy for crush-related acute kidney injury patients, even in locations or facilities without full-scale dialysis machines.

  17. Beliefs about penis size: validation of a scale for men ashamed about their penis size.

    Science.gov (United States)

    Veale, David; Eshkevari, Ertimiss; Read, Julie; Miles, Sarah; Troglia, Andrea; Phillips, Rachael; Echeverria, Lina Maria Carmona; Fiorito, Chiara; Wylie, Kevan; Muir, Gordon

    2014-01-01

    No measures are available for understanding beliefs in men who experience shame about the perceived size of their penis. Such a measure might be helpful for treatment planning, and measuring outcome after any psychological or physical intervention. Our aim was to validate a newly developed measure called the Beliefs about Penis Size Scale (BAPS). One hundred seventy-three male participants completed a new questionnaire consisting of 18 items to be validated and developed into the BAPS, as well as various other standardized measures. A urologist also measured actual penis size. The BAPS was validated against six psychosexual self-report questionnaires as well as penile size measurements. Exploratory factor analysis reduced the number of items in the BAPS from 18 to 10, which was best explained by one factor. The 10-item BAPS had good internal consistency and correlated significantly with measures of depression, anxiety, body image quality of life, social anxiety, erectile function, overall satisfaction, and the importance attached to penis size. The BAPS was not found to correlate with actual penis size. It was able to discriminate between those who had concerns or were dissatisfied about their penis size and those who were not. This is the first study to develop a scale for measurement of beliefs about penis size. It may be used as part of an assessment for men who experience shame about the perceived size of their penis and as an outcome measure after treatment. The BAPS measures various manifestations of masculinity and shame about their perceived penis size including internal self-evaluative beliefs; negative evaluation by others; anticipated consequences of a perceived small penis, and extreme self-consciousness. © 2013 International Society for Sexual Medicine.

  18. Finite-size scaling a collection of reprints

    CERN Document Server

    1988-01-01

    Over the past few years, finite-size scaling has become an increasingly important tool in studies of critical systems. This is partly due to an increased understanding of finite-size effects by analytical means, and partly due to our ability to treat larger systems with large computers. The aim of this volume was to collect those papers which have been important for this progress and which illustrate novel applications of the method. The emphasis has been placed on relatively recent developments, including the use of the ε-expansion and of conformal methods.

  19. Finite-size scaling of survival probability in branching processes.

    Science.gov (United States)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y / (e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
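
    A quick Monte Carlo illustration of this finite-size behaviour: at criticality (mean offspring 1, offspring variance σ²), the survival probability after n generations decays as roughly 2/(σ²n) - Kolmogorov's classical asymptotic, which the scaling form above reduces to at the critical point. The sketch below, with Poisson(1) offspring (σ² = 1), is illustrative only:

        import numpy as np

        rng = np.random.default_rng(1)

        def survives(n_gen, rng):
            z = 1
            for _ in range(n_gen):
                z = rng.poisson(1.0, z).sum()  # Poisson(1) offspring: sigma^2 = 1
                if z == 0:
                    return False
                z = min(z, 10**6)              # cap to keep the simulation cheap
            return True

        for n in (10, 20, 40, 80):
            trials = 20000
            p = sum(survives(n, rng) for _ in range(trials)) / trials
            print(f"n={n:3d}  P_survival={p:.4f}  2/n={2/n:.4f}")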

  20. Size scaling of negative hydrogen ion sources for fusion

    Science.gov (United States)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-04-01

    The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛ scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step, a ½ scale ITER source went into operation at the IPP test facility ELISE, with first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER-relevant size at ELISE, in which operational issues, physical aspects and source performance are addressed, highlighting differences as well as similarities. The most ITER-relevant results are: low-pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a dedicated plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to that in the prototype source despite the larger size.

  1. Vertebral scale system to measure canine heart size in radiographs

    International Nuclear Information System (INIS)

    Buchanan, J.W.; Bucheler, J.

    1995-01-01

    A method for measuring canine heart size in radiographs was developed on the basis that there is a good correlation between heart size and body length regardless of the conformation of the thorax. The lengths of the long and short axes of the hearts of 100 clinically normal dogs were determined with calipers, and the dimensions were scaled against the length of vertebrae dorsal to the heart beginning with T4. The sum of the long and short axes of the heart, expressed as vertebral heart size, was 9.7 +/- 0.5 vertebrae. The differences between dogs with a wide or deep thorax, males and females, and right or left lateral recumbency were not significant. The caudal vena cava was 0.75 +/- 0.13 vertebrae in comparison to the length of the vertebra over the tracheal bifurcation.
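
    The vertebral-heart-size bookkeeping can be sketched as follows; the vertebral lengths and caliper readings are hypothetical, and the conversion simply steps along the vertebrae starting at T4:

        def axis_in_vertebrae(axis_mm, vertebral_lengths_mm):
            """Express a heart-axis length as a (fractional) count of vertebrae."""
            units, remaining = 0.0, axis_mm
            for v in vertebral_lengths_mm:   # T4, T5, T6, ... in order
                if remaining <= 0:
                    break
                step = min(remaining, v)
                units += step / v
                remaining -= step
            return units

        vertebrae = [20.0, 21.0, 21.5, 22.0, 22.0, 23.0]  # hypothetical T4..T9 (mm)
        long_axis, short_axis = 115.0, 85.0               # hypothetical caliper values
        vhs = (axis_in_vertebrae(long_axis, vertebrae)
               + axis_in_vertebrae(short_axis, vertebrae))
        print(f"VHS = {vhs:.1f} vertebrae (normal dogs: 9.7 +/- 0.5)")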

  2. Finite-size scaling in two-dimensional superfluids

    International Nuclear Information System (INIS)

    Schultka, N.; Manousakis, E.

    1994-01-01

    Using the x-y model and a nonlocal updating scheme called cluster Monte Carlo, we calculate the superfluid density of a two-dimensional superfluid on large-size square lattices LxL up to 400x400. This technique allows us to approach temperatures close to the critical point, and by studying a wide range of L values and applying finite-size scaling theory we are able to extract the critical properties of the system. We calculate the superfluid density and from that we extract the renormalization-group beta function. We derive finite-size scaling expressions using the Kosterlitz-Thouless-Nelson renormalization group equations and show that they are in very good agreement with our numerical results. This allows us to extrapolate our results to the infinite-size limit. We also find that the universal discontinuity of the superfluid density at the critical temperature is in very good agreement with the Kosterlitz-Thouless-Nelson calculation and experiments

  3. Topological and sizing optimization of reinforced ribs for a machining centre

    Science.gov (United States)

    Chen, T. Y.; Wang, C. B.

    2008-01-01

    The topology optimization technique is applied to improve rib designs of a machining centre. The ribs of the original design are eliminated and new ribs are generated by topology optimization in the same 3D design space containing the original ribs. Two-dimensional plate elements are used to replace the optimum rib topologies formed by 3D rectangular elements. After topology optimization, sizing optimization is used to determine the optimum thicknesses of the ribs. When forming the optimum design problem, multiple configurations of the structure are considered simultaneously. The objective is to minimize rib weight. Static constraints confine displacements of the cutting tool and the workpiece due to cutting forces and the heat generated by spindle bearings. The dynamic constraint requires the fundamental natural frequency of the structure to be greater than a given value in order to reduce dynamic deflection. Compared with the original design, the improvement resulting from this approach is significant.

  4. A general model for the scaling of offspring size and adult size.

    Science.gov (United States)

    Falster, Daniel S; Moles, Angela T; Westoby, Mark

    2008-09-01

    Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per time, a shortened juvenile phase, and advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This goes on to explain alternative allometries between log offspring size and log adult size, as observed in mammals (slope = 0.95) and plants (slope = 0.54). Further, it suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.

  5. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker comprised of a DLSSVM model and a scale correlation filter obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  6. Model of large scale man-machine systems with an application to vessel traffic control

    NARCIS (Netherlands)

    Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.

    1989-01-01

    Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the

  7. CT crown for on-machine scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for on-machine calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises an invar disc on which several reference ruby spheres are positioned at different heights using carbon fibre rods. The artefact is positioned and scanned together...

  8. The scaling of human interactions with city size.

    Science.gov (United States)

    Schläpfer, Markus; Bettencourt, Luís M A; Grauwin, Sébastian; Raschke, Mathias; Claxton, Rob; Smoreda, Zbigniew; West, Geoffrey B; Ratti, Carlo

    2014-09-06

    The size of cities is known to play a fundamental role in social and economic life. Yet, its relation to the structure of the underlying network of human interactions has not been investigated empirically in detail. In this paper, we map society-wide communication networks to the urban areas of two European countries. We show that both the total number of contacts and the total communication activity grow superlinearly with city population size, according to well-defined scaling relations and resulting from a multiplicative increase that affects most citizens. Perhaps surprisingly, however, the probability that an individual's contacts are also connected with each other remains largely unaffected. These empirical results predict a systematic and scale-invariant acceleration of interaction-based spreading phenomena as cities get bigger, which is numerically confirmed by applying epidemiological models to the studied networks. Our findings should provide a microscopic basis towards understanding the superlinear increase of different socioeconomic quantities with city size, that applies to almost all urban systems and includes, for instance, the creation of new inventions or the prevalence of certain contagious diseases. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
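
    The standard test for such superlinear scaling, Y ∝ N^β with β > 1, is an ordinary least-squares fit in log-log coordinates; a sketch on synthetic data (not the study's communication records):

        import numpy as np

        rng = np.random.default_rng(2)
        pop = 10 ** rng.uniform(4, 7, 200)                   # city populations
        beta_true = 1.12                                     # superlinear exponent
        contacts = 0.5 * pop ** beta_true * rng.lognormal(0, 0.2, 200)

        # Fit log Y = beta * log N + log Y0 by least squares.
        beta, log_y0 = np.polyfit(np.log(pop), np.log(contacts), 1)
        print(f"estimated beta = {beta:.3f} (beta > 1: superlinear scaling)")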

  9. Design Of A Small-Scale Hulling Machine For Improved Wet-Processed Coffee.

    Directory of Open Access Journals (Sweden)

    Adeleke

    2017-08-01

    Full Text Available The method of primary processing of coffee is a vital determinant of quality and price. The wet processing method produces higher quality beans but is very labourious. This work outlines the design of a small-scale, cost-effective, ergonomic, and easily maintained and operated coffee hulling machine that can improve the quality and productivity of green coffee beans. The machine can be constructed from locally available materials at a relatively low cost of about NGN 140,000.00, with a cheap running cost. The beaters are made from rubber strips which deflect on contact with any obstruction, causing little or no stress on drum members and reducing the risk of damage to both the beans and the machine. The machine is portable and detachable, making it suitable for joint ownership by a group of farmers who can move it from one farm to another, which eases affordability and spreads the running cost. The running cost may be further reduced by the fact that the machine is powered by a 3.0 Hp petrol engine which is suitable for other purposes among rural dwellers. The eventual construction of the machine will encourage more farmers to take up wet processing of coffee and reduce the foreign exchange hitherto lost for this purpose.

  10. Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2011-01-01

    Full Text Available The large-scale rotary machine with multiple supports, such as the rotary kiln and the rope laying machine, is key equipment in the architectural, chemical, and agricultural industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis line deflection is a vital parameter for determining the mechanical state of a rotary machine, so body axial vibration needs to be studied for dynamic monitoring and adjusting of the machine. Using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements, namely, a rigid disk, an elastic shaft, and a linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the overall motion equation for the response. Taking a rotary kiln as an instance, natural frequencies, modal shapes, and the vibration response to a given exciting axis line deflection are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in common axis line measurement methods. The displacement response can be used for further measurement dynamical error analysis and compensation. The overall motion equation for the response can be applied to predict the body motion under abnormal mechanical conditions and to provide theoretical guidance for machine failure diagnosis.

  11. Size scaling of negative hydrogen ion sources for fusion

    International Nuclear Information System (INIS)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-01-01

    The RF-driven negative hydrogen ion source (H−, D−) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛ scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step, a ½ scale ITER source went into operation at the IPP test facility ELISE, with first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER-relevant size at ELISE, in which operational issues, physical aspects and source performance are addressed, highlighting differences as well as similarities. The most ITER-relevant results are: low-pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a dedicated plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to that in the prototype source despite the larger size

  12. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

    OpenAIRE

    Abadi, Martín; Agarwal, Ashish; Barham, Paul; Brevdo, Eugene; Chen, Zhifeng; Citro, Craig; Corrado, Greg S.; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Goodfellow, Ian; Harp, Andrew; Irving, Geoffrey; Isard, Michael

    2016-01-01

    TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algo...

  13. Transient characteristics of current lead losses for the large scale high-temperature superconducting rotating machine

    International Nuclear Information System (INIS)

    Le, T. D.; Kim, J. H.; Park, S. I.; Kim, D. J.; Kim, H. M.; Lee, H. G.; Yoon, Y. S.; Jo, Y. S.; Yoon, K. Y.

    2014-01-01

    To minimize the heat loss of the current leads for a high-temperature superconducting (HTS) rotating machine, the conductor properties and the lead geometry - such as length, cross section, and cooling surface area - are among the significant factors that must be selected. An optimal lead for a large-scale HTS rotating machine has been presented before. Following this line of work, this paper continues to reduce the heat loss of the HTS part according to different models. It also determines the simplifying conditions for an evaluation of the transient characteristics of the main flux flow loss and the eddy current loss during the charging and discharging periods.

  14. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  15. Towards modeling intergranular stress corrosion cracks on grain size scales

    International Nuclear Information System (INIS)

    Simonovski, Igor; Cizelj, Leon

    2012-01-01

    Highlights: ► Simulating the onset and propagation of intergranular cracking. ► Model based on the as-measured geometry and crystallographic orientations. ► Feasibility, performance of the proposed computational approach demonstrated. - Abstract: Development of advanced models at the grain size scales has so far been mostly limited to simulated geometry structures such as for example 3D Voronoi tessellations. The difficulty came from a lack of non-destructive techniques for measuring the microstructures. In this work a novel grain-size scale approach for modelling intergranular stress corrosion cracking based on as-measured 3D grain structure of a 400 μm stainless steel wire is presented. Grain topologies and crystallographic orientations are obtained using a diffraction contrast tomography, reconstructed within a detailed finite element model and coupled with advanced constitutive models for grains and grain boundaries. The wire is composed of 362 grains and over 1600 grain boundaries. Grain boundary damage initialization and early development is then explored for a number of cases, ranging from isotropic elasticity up to crystal plasticity constitutive laws for the bulk grain material. In all cases the grain boundaries are modeled using the cohesive zone approach. The feasibility of the approach is explored.

  16. Asymmetric fluid criticality. II. Finite-size scaling for simulations.

    Science.gov (United States)

    Kim, Young C; Fisher, Michael E

    2003-10-01

    The vapor-liquid critical behavior of intrinsically asymmetric fluids is studied in finite systems of linear dimensions L focusing on periodic boundary conditions, as appropriate for simulations. The recently propounded "complete" thermodynamic (L → ∞) scaling theory incorporating pressure mixing in the scaling fields as well as corrections to scaling [Phys. Rev. E 67, 061506 (2003)] is extended to finite L, initially in a grand canonical representation. The theory allows for a Yang-Yang anomaly in which, when L → ∞, the second temperature derivative d²μ_σ/dT² of the chemical potential along the phase boundary μ_σ(T) diverges when T → Tc⁻. The finite-size behavior of various special critical loci in the temperature-density or (T, ρ) plane, in particular, the k-inflection susceptibility loci and the Q-maximal loci - derived from Q_L(T, ⟨ρ⟩_L) ≡ ⟨m²⟩_L² / ⟨m⁴⟩_L, where m ≡ ρ − ⟨ρ⟩_L - is carefully elucidated and shown to be of value in estimating Tc and ρc. Concrete illustrations are presented for the hard-core square-well fluid and for the restricted primitive model electrolyte, including an estimate of the correlation exponent ν that confirms Ising-type character. The treatment is extended to the canonical representation where further complications appear.
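
    The moment ratio at the heart of the Q-maximal loci can be estimated directly from sampled density fluctuations; a sketch with synthetic Gaussian samples (for which Q_L → 1/3) rather than simulation data:

        import numpy as np

        rng = np.random.default_rng(3)
        rho = rng.normal(0.30, 0.02, 100_000)   # hypothetical density samples
        m = rho - rho.mean()                    # m = rho - <rho>_L
        q = m.var() ** 2 / (m ** 4).mean()      # Q_L = <m^2>^2 / <m^4>
        print(f"Q_L = {q:.4f} (Gaussian limit: 1/3)")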

  17. Variability of the raindrop size distribution at small spatial scales

    Science.gov (United States)

    Berne, A.; Jaffrain, J.

    2010-12-01

    Because of the interactions between atmospheric turbulence and cloud microphysics, the raindrop size distribution (DSD) is strongly variable in space and time. The spatial variability of the DSD at small spatial scales (below a few km) is not well documented and not well understood, mainly because of a lack of adequate measurements at the appropriate resolutions. A network of 16 disdrometers (Parsivels) has been designed and set up over EPFL campus in Lausanne, Switzerland. This network covers a typical operational weather radar pixel of 1x1 km2. The question of the significance of the variability of the DSD at such small scales is relevant for radar remote sensing of rainfall because the DSD is often assumed to be uniform within a radar sample volume and because the Z-R relationships used to convert the measured radar reflectivity Z into rain rate R are usually derived from point measurements. Thanks to the number of disdrometers, it was possible to quantify the spatial variability of the DSD at the radar pixel scale and to show that it can be significant. In this contribution, we show that the variability of the total drop concentration, of the median volume diameter and of the rain rate are significant, taking into account the sampling uncertainty associated with disdrometer measurements. The influence of this variability on the Z-R relationship can be non-negligible. Finally, the spatial structure of the DSD is quantified using a geostatistical tool, the variogram, and indicates high spatial correlation within a radar pixel.
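
    The variogram mentioned above is straightforward to estimate from the 16 stations; the sketch below computes a binned empirical semivariance γ(h) = ½ · mean[(z_i − z_j)²] over station pairs at lag h, with synthetic coordinates and rain rates standing in for the disdrometer data:

        import numpy as np

        rng = np.random.default_rng(4)
        xy = rng.uniform(0, 1000, (16, 2))      # station coordinates (m)
        z = rng.gamma(2.0, 2.0, 16)             # rain rate at each station (mm/h)

        bins = np.array([0, 200, 400, 600, 800, 1200])
        sums = np.zeros(len(bins) - 1)
        counts = np.zeros(len(bins) - 1)
        for i in range(16):
            for j in range(i + 1, 16):
                h = np.linalg.norm(xy[i] - xy[j])
                k = np.searchsorted(bins, h) - 1
                if 0 <= k < len(sums):
                    sums[k] += 0.5 * (z[i] - z[j]) ** 2
                    counts[k] += 1

        gamma = np.divide(sums, counts, out=np.full_like(sums, np.nan),
                          where=counts > 0)
        print("lag bin upper edges (m):", bins[1:])
        print("semivariance:", np.round(gamma, 2))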

  18. Marine snow microbial communities: scaling of abundances with aggregate size

    DEFF Research Database (Denmark)

    Kiørboe, Thomas

    2003-01-01

    Marine aggregates are inhabited by diverse microbial communities, and the concentration of attached microbes typically exceeds concentrations in the ambient water by orders of magnitude. An extension of the classical Lotka-Volterra model, which includes 3 trophic levels (bacteria, flagellates, and ciliates), suggests that attached bacterial populations are controlled by flagellate grazing, while flagellate and ciliate populations are governed by colonization and detachment. The model also suggests that microbial populations are turned over rapidly (1 to 20 times d-1) due to continued colonization and detachment. The model somewhat overpredicts the scaling of microbial abundances with aggregate size observed in field-collected aggregates. This may be because it disregards the aggregation/disaggregation dynamics of aggregates, as well as interspecific interactions between bacteria.

  19. Latent hardening size effect in small-scale plasticity

    Science.gov (United States)

    Bardella, Lorenzo; Segurado, Javier; Panteghini, Andrea; Llorca, Javier

    2013-07-01

    We aim at understanding the multislip behaviour of metals subject to irreversible deformations at small-scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP), involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of the conventional latent hardening. In order to discuss the capability of the SGCP theory proposed, we implement it into a Finite Element (FE) code and set its material parameters on the basis of the DD results. The SGCP FE code is specifically developed for the boundary value problem under study so that we can implement a fully implicit (Backward Euler) consistent algorithm. Special emphasis is placed on the discussion of the role of the material length scales involved in the SGCP model, from both the mechanical and numerical points of view.

  20. Latent hardening size effect in small-scale plasticity

    International Nuclear Information System (INIS)

    Bardella, Lorenzo; Panteghini, Andrea; Segurado, Javier; Llorca, Javier

    2013-01-01

    We aim at understanding the multislip behaviour of metals subject to irreversible deformations at small-scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP), involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of the conventional latent hardening. In order to discuss the capability of the SGCP theory proposed, we implement it into a Finite Element (FE) code and set its material parameters on the basis of the DD results. The SGCP FE code is specifically developed for the boundary value problem under study so that we can implement a fully implicit (Backward Euler) consistent algorithm. Special emphasis is placed on the discussion of the role of the material length scales involved in the SGCP model, from both the mechanical and numerical points of view. (paper)

  1. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique, called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method, employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution

  2. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    International Nuclear Information System (INIS)

    Dednam, W; Botha, A E

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique, called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method, employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution
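
    A sketch of the fluctuation-counting route described in both records above: for a single component, the Kirkwood-Buff integral over a sub-volume v follows from G(v) = v·Var(N)/⟨N⟩² − v/⟨N⟩, to be extrapolated in inverse sub-volume size. Ideal-gas (Poisson) positions, for which G vanishes, stand in for an MD configuration:

        import numpy as np

        rng = np.random.default_rng(5)
        BOX, N = 50.0, 20_000
        pts = rng.uniform(0, BOX, (N, 3))     # ideal-gas stand-in for an MD frame

        def kb_integral(pts, box, sub, n_samples=1500):
            """Estimate G from particle-number fluctuations in cubic sub-volumes."""
            corners = rng.uniform(0, box - sub, (n_samples, 3))
            counts = np.array([np.all((pts >= c) & (pts < c + sub), axis=1).sum()
                               for c in corners])
            v, mean = sub ** 3, counts.mean()
            return v * counts.var() / mean ** 2 - v / mean

        for sub in (2.0, 4.0, 8.0):
            print(f"sub-volume edge {sub:4.1f}:  G ~ {kb_integral(pts, BOX, sub):+.3f}")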

  3. Constant size descriptors for accurate machine learning models of molecular properties

    Science.gov (United States)

    Collins, Christopher R.; Gordon, Geoffrey J.; von Lilienfeld, O. Anatole; Yaron, David J.

    2018-06-01

    Two different classes of molecular representations for use in machine learning of thermodynamic and electronic properties are studied. The representations are evaluated by monitoring the performance of linear and kernel ridge regression models on well-studied data sets of small organic molecules. One class of representations studied here counts the occurrence of bonding patterns in the molecule. These require only the connectivity of atoms in the molecule as may be obtained from a line diagram or a SMILES string. The second class utilizes the three-dimensional structure of the molecule. These include the Coulomb matrix and Bag of Bonds, which list the inter-atomic distances present in the molecule, and Encoded Bonds, which encode such lists into a feature vector whose length is independent of molecular size. Encoded Bonds' features introduced here have the advantage of leading to models that may be trained on smaller molecules and then used successfully on larger molecules. A wide range of feature sets are constructed by selecting, at each rank, either a graph or geometry-based feature. Here, rank refers to the number of atoms involved in the feature, e.g., atom counts are rank 1, while Encoded Bonds are rank 2. For atomization energies in the QM7 data set, the best graph-based feature set gives a mean absolute error of 3.4 kcal/mol. Inclusion of 3D geometry substantially enhances the performance, with Encoded Bonds giving 2.4 kcal/mol, when used alone, and 1.19 kcal/mol, when combined with graph features.
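
    The Coulomb matrix named above has a compact closed form, M_ii = 0.5·Z_i^2.4 and M_ij = Z_i·Z_j/|R_i − R_j|; the sketch below builds and sorts it for a single water molecule with approximate coordinates (an illustration, not the paper's pipeline):

        import numpy as np

        Z = np.array([8.0, 1.0, 1.0])                  # O, H, H
        R = np.array([[0.000, 0.000, 0.000],
                      [0.758, 0.586, 0.000],
                      [-0.758, 0.586, 0.000]])         # Angstrom, approximate

        def coulomb_matrix(Z, R):
            d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
            with np.errstate(divide="ignore"):
                M = np.outer(Z, Z) / d                 # off-diagonal Z_i Z_j / r_ij
            np.fill_diagonal(M, 0.5 * Z ** 2.4)        # diagonal 0.5 Z^2.4
            return M

        M = coulomb_matrix(Z, R)
        # Sort rows/columns by row norm for permutation invariance.
        order = np.argsort(-np.linalg.norm(M, axis=1))
        print(M[np.ix_(order, order)].round(2))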

  4. Accelerating Relevance Vector Machine for Large-Scale Data on Spark

    Directory of Open Access Journals (Sweden)

    Liu Fang

    2017-01-01

    Full Text Available Relevance vector machine (RVM) is a machine learning algorithm based on a sparse Bayesian framework, which performs well when running classification and regression tasks on small-scale datasets. However, RVM also has certain drawbacks which restrict its practical applications, such as (1) a slow training process and (2) poor performance when training on large-scale datasets. In order to solve these problems, we first propose Discrete AdaBoost RVM (DAB-RVM), which incorporates ensemble learning in RVM. This method performs well with large-scale low-dimensional datasets. However, as the number of features increases, the training time of DAB-RVM increases as well. To avoid this phenomenon, we utilize the sufficient training samples of large-scale datasets and propose all features boosting RVM (AFB-RVM), which modifies the way of obtaining weak classifiers. In our experiments we study the differences between various boosting techniques with RVM, demonstrating the performance of the proposed approaches on Spark. As a result of this paper, two proposed approaches on Spark for different types of large-scale datasets are available.
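
    The Discrete AdaBoost wrapper at the core of DAB-RVM can be sketched generically; since no RVM ships with scikit-learn, a decision stump stands in for the base learner here, and the weighting scheme is the standard Discrete AdaBoost update, not the authors' Spark implementation:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=2000, random_state=0)
        y = 2 * y - 1                                    # labels in {-1, +1}
        w = np.full(len(y), 1 / len(y))                  # uniform initial weights
        learners, alphas = [], []

        for _ in range(25):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = w[pred != y].sum()                     # weighted training error
            if err >= 0.5:
                break
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
            w *= np.exp(-alpha * y * pred)               # up-weight mistakes
            w /= w.sum()
            learners.append(stump)
            alphas.append(alpha)

        F = sum(a * h.predict(X) for a, h in zip(alphas, learners))
        print("ensemble training accuracy:", (np.sign(F) == y).mean())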

  5. A study of energy-size relationship and wear rate in a lab-scale high pressure grinding rolls unit

    Science.gov (United States)

    Rashidi Dashtbayaz, Samira

    This study is focused on two independent topics of energy-size relationship and wear-rate measurements on a lab-scale high pressure grinding rolls (HPGR). The first part of this study has been aimed to investigate the influence of the operating parameters and the feed characteristics on the particle-bed breakage using four different ore samples in a 200 mm x 100 mm lab-scale HPGR. Additionally, multistage grinding, scale-up from a lab-scale HPGR, and prediction of the particle size distributions have been studied in detail. The results obtained from energy-size relationship studies help with better understanding of the factors contributing to more energy-efficient grinding. It will be shown that the energy efficiency of the two configurations of locked-cycle and open multipass is completely dependent on the ore properties. A test procedure to produce the scale-up data is presented. The comparison of the scale-up factors between the data obtained on the University of Utah lab-scale HPGR and the industrial machine at the Newmont Boddington plant confirmed the applicability of lab-scale machines for trade-off studies. The population balance model for the simulation of product size distributions has shown to work well with the breakage function estimated through tests performed on the HPGR at high rotational speed. Selection function has been estimated by back calculation of population balance model with the help of the experimental data. This is considered to be a major step towards advancing current research on the simulation of particle size distribution by using the HPGR machine for determining the breakage function. Developing a technique/setup to measure the wear rate of the HPGR rolls' surface is the objective of the second topic of this dissertation. A mockup was initially designed to assess the application of the linear displacement sensors for measuring the rolls' weight loss. Upon the analysis of that technique and considering the corresponding sources of

  6. Scaling up liquid state machines to predict over address events from dynamic vision sensors.

    Science.gov (United States)

    Kaiser, Jacques; Stal, Rainer; Subramoney, Anand; Roennau, Arne; Dillmann, Rüdiger

    2017-09-01

    Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (Maass et al 2002 Neural Comput. 14 2531-60 and Burgsteiner H et al 2007 Appl. Intell. 26 99-109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.
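
    A heavily simplified, rate-based reservoir sketch (echo-state style, not the paper's spiking LSM): a fixed random recurrent network filters an input stream, and a linear least-squares readout is trained to predict the input a few steps ahead; all sizes and signals are illustrative:

        import numpy as np

        rng = np.random.default_rng(6)
        n_res, horizon = 300, 5
        W = rng.normal(0, 1, (n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
        W_in = rng.normal(0, 1, (n_res, 1))

        u = np.sin(np.arange(2000) * 0.05)[:, None]       # stand-in sensor stream
        x = np.zeros(n_res)
        states = np.empty((len(u), n_res))
        for t in range(len(u)):                           # drive the reservoir
            x = np.tanh(W @ x + (W_in @ u[t]).ravel())
            states[t] = x

        # Linear readout: least squares from reservoir state to future input.
        A, b = states[:-horizon], u[horizon:, 0]
        w_out = np.linalg.lstsq(A, b, rcond=None)[0]
        err = np.sqrt(np.mean((A @ w_out - b) ** 2))
        print(f"readout training RMSE: {err:.4f}")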

  7. Finite size scaling analysis of disordered electron systems

    International Nuclear Information System (INIS)

    Markos, P.

    2012-01-01

    We demonstrated the application of the finite size scaling method to the analysis of the transition of a disordered system from the metallic to the insulating regime. The method enables us to calculate the critical point and the critical exponent which determines the divergence of the correlation length in the vicinity of the critical point. The universality of the metal-insulator transition was verified by numerical analysis of various physical parameters, and the critical exponent was calculated with high accuracy for different disordered models. The numerically obtained value of the critical exponent for the three-dimensional disordered model (1) has recently been supported by semi-analytical work and verified by experimental optical measurements equivalent to the three-dimensional disordered model (1). Another unsolved problem of localization is the disagreement between numerical results and the predictions of analytical theories. At present, no analytical theory confirms the numerically obtained values of the critical exponents. The reason for this disagreement lies in the statistical character of the process of localization. The theory must consider all possible scattering processes on randomly distributed impurities. All physical variables are statistical quantities with broad probability distributions, and it is in general not known how to calculate their mean values analytically. We believe that detailed numerical analysis of various disordered systems will provide inspiration for the formulation of such an analytical theory. (authors)

  8. Hunting for Hydrothermal Vents at the Local-Scale Using AUV's and Machine-Learning Classification in the Earth's Oceans

    Science.gov (United States)

    White, S. M.

    2018-05-01

    New AUV-based mapping technology coupled with machine-learning methods for detecting individual vents and vent fields at the local-scale raise the possibility of understanding the geologic controls on hydrothermal venting.

  9. Flow Characteristics and Sizing of Annular Seat Valves for Digital Displacement Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Bech, Michael Møller; Andersen, Torben O.

    2018-01-01

    This paper investigates the steady-state flow characteristics and power losses of annular seat valves for digital displacement machines. Annular seat valves are promising candidates for active check-valves used in digital displacement fluid power machinery, which excels in efficiency in a broad operating range. To achieve high machine efficiency, the valve flow losses and the required electrical power needed for valve switching should be low. The annular valve plunger geometry, of a valve prototype developed for digital displacement machines, is parametrized by three parameters: stroke length, seat radius and seat width. The steady-state flow characteristics are analyzed using static axi-symmetric computational fluid dynamics, and the simulated results are compared against measurements using a valve prototype. Using the simulated maps to estimate the flow power losses, and a simple generic model to estimate the electric power losses, both during digital displacement operation, optimal designs of annular seat valves, with respect to valve power losses, are derived under several different operating conditions.

  10. Multi products single machine economic production quantity model with multiple batch size

    Directory of Open Access Journals (Sweden)

    Ata Allah Taleizadeh

    2011-04-01

    Full Text Available In this paper, a multi-product single-machine economic production quantity model with discrete delivery is developed. A unique cycle length is considered for all produced items, with the assumption that all products are manufactured on a single machine with limited capacity. The proposed model considers different cost components such as production, setup, holding, and transportation costs. The resulting model is formulated as a mixed integer nonlinear programming model. A harmony search algorithm, an extended cutting plane method, and particle swarm optimization are used to solve the proposed model. Two numerical examples are used to analyze and evaluate the performance of the proposed model.
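
    Under the common-cycle assumption described above, the cost-minimizing cycle length has a closed form, T* = sqrt(2·ΣA_i / Σ h_i·d_i·(1 − d_i/p_i)); the sketch below evaluates it for hypothetical items and ignores the transportation term and integer batch constraints of the full model:

        import math

        # Per item: demand rate d, production rate p, setup cost A, holding cost h.
        items = [dict(d=400, p=2000, A=120.0, h=0.6),
                 dict(d=250, p=1500, A=90.0, h=0.8),
                 dict(d=100, p=900, A=60.0, h=1.1)]

        assert sum(it["d"] / it["p"] for it in items) < 1   # machine feasibility

        setup_sum = sum(it["A"] for it in items)
        hold_sum = sum(it["h"] * it["d"] * (1 - it["d"] / it["p"]) for it in items)
        T_star = math.sqrt(2 * setup_sum / hold_sum)        # optimal common cycle
        cost = setup_sum / T_star + 0.5 * hold_sum * T_star # total cost rate

        print(f"T* = {T_star:.3f}, cost rate = {cost:.1f}")
        for it in items:
            print(f"  batch size per cycle: Q = {it['d'] * T_star:.1f}")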

  11. A Machine Learning Approach to Estimate Riverbank Geotechnical Parameters from Sediment Particle Size Data

    Science.gov (United States)

    Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon

    2015-04-01

    Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models are limited to those sites where extensive field data has been collected, where their ability to provide predictions of bank erosion at the reach scale are limited without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Also, numerical models commonly can be too rigid with respect to detecting unexpected features like the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternate modelling approach capable of using available data. The application of the Self-Organizing Maps (SOM) approach is well-suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland State, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods, splitting the dataset into training and validation set, and through a Bootstrap approach. The basis of Bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the
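
    The leave-one-out estimation idea can be sketched with a small from-scratch SOM: train on joint [PSD features | geotechnical parameters] vectors, then match a held-out sample by its PSD part only and read the estimate off the winning unit's geotechnical components; all data here are synthetic:

        import numpy as np

        rng = np.random.default_rng(7)
        n, n_psd = 100, 6
        psd = rng.uniform(0, 1, (n, n_psd))                  # particle-size features
        geo = psd @ rng.uniform(-1, 1, (n_psd, 2)) + rng.normal(0, 0.05, (n, 2))
        data = np.hstack([psd, geo])                         # joint training vectors

        grid = np.array([(i, j) for i in range(8) for j in range(8)], float)
        W = rng.uniform(0, 1, (64, data.shape[1]))

        def train(W, X, iters=3000, lr0=0.5, sig0=3.0):
            for t in range(iters):
                x = X[rng.integers(len(X))]
                bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
                frac = t / iters
                lr, sig = lr0 * (1 - frac), sig0 * (1 - frac) + 0.5
                nb = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sig ** 2))
                W += lr * nb[:, None] * (x - W)              # pull neighbourhood
            return W

        W = train(W, data[1:])                               # hold sample 0 out
        bmu = np.argmin(((W[:, :n_psd] - psd[0]) ** 2).sum(axis=1))  # PSD part only
        print("estimated geo params:", W[bmu, n_psd:], " true:", geo[0])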

  12. Vertebral scale system to measure heart size in thoracic radiographs ...

    African Journals Online (AJOL)

    In veterinary diagnostic radiology, determination of heart size is necessary in the assessment of patients with clinical signs of cardiac anomaly. In this study, heart sizes were compared with lengths of mid-thoracic vertebrae in 12 clinically normal West African Dwarf Goats (WADGs) (8 females, 4 males). The aim of the ...

  13. Decreased attention to object size information in scale errors performers

    NARCIS (Netherlands)

    Grzyb, B.J.; Cangelosi, A.; Cattani, A.; Floccia, C.

    2017-01-01

    Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. The existing explanations of these curious action errors assume (but never explicitly tested) children’s decreased attention to object size information. This study

  14. Flow Characteristics and Sizing of Annular Seat Valves for Digital Displacement Machines

    Directory of Open Access Journals (Sweden)

    Christian Nørgård

    2018-01-01

    Full Text Available This paper investigates the steady-state flow characteristics and power losses of annular seat valves for digital displacement machines. Annular seat valves are promising candidates for active check-valves used in digital displacement fluid power machinery which excels in efficiency in a broad operating range. To achieve high machine efficiency, the valve flow losses and the required electrical power needed for valve switching should be low. The annular valve plunger geometry, of a valve prototype developed for digital displacement machines, is parametrized by three parameters: stroke length, seat radius and seat width. The steady-state flow characteristics are analyzed using static axi-symmetric computational fluid dynamics. The pressure drops and flow forces are mapped in the valve design space for several different flow rates. The simulated results are compared against measurements using a valve prototype. Using the simulated maps to estimate the flow power losses and a simple generic model to estimate the electric power losses, both during digital displacement operation, optimal designs of annular seat valves, with respect to valve power losses, are derived under several different operating conditions.

  15. Spatial patterns of correlated scale size and scale color in relation to color pattern elements in butterfly wings.

    Science.gov (United States)

    Iwata, Masaki; Otaki, Joji M

    2016-02-01

    Complex butterfly wing color patterns are coordinated throughout a wing by unknown mechanisms that provide undifferentiated immature scale cells with positional information for scale color. Because there is a reasonable level of correspondence between the color pattern element and scale size at least in Junonia orithya and Junonia oenone, a single morphogenic signal may contain positional information for both color and size. However, this color-size relationship has not been demonstrated in other species of the family Nymphalidae. Here, we investigated the distribution patterns of scale size in relation to color pattern elements on the hindwings of the peacock pansy butterfly Junonia almana, together with other nymphalid butterflies, Vanessa indica and Danaus chrysippus. In these species, we observed a general decrease in scale size from the basal to the distal areas, although the size gradient was small in D. chrysippus. Scales of dark color in color pattern elements, including eyespot black rings, parafocal elements, and submarginal bands, were larger than those of their surroundings. Within an eyespot, the largest scales were found at the focal white area, although there were exceptional cases. Similarly, ectopic eyespots that were induced by physical damage on the J. almana background area had larger scales than in the surrounding area. These results are consistent with the previous finding that scale color and size coordinate to form color pattern elements. We propose a ploidy hypothesis to explain the color-size relationship in which the putative morphogenic signal induces the polyploidization (genome amplification) of immature scale cells and that the degrees of ploidy (gene dosage) determine scale color and scale size simultaneously in butterfly wings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
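
    As a rough illustration of the linear-versus-kernel trade-off described above, the following sketch compares scikit-learn's LIBLINEAR-backed LinearSVR with the libsvm-backed RBF SVR. The data are random stand-ins for the signature descriptors and logD targets, not the study's dataset:

        import time
        import numpy as np
        from sklearn.svm import LinearSVR, SVR
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((8000, 200))                       # stand-in for signature descriptors
        y = X @ rng.normal(size=200) + rng.normal(scale=0.1, size=8000)   # stand-in for logD values

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        for name, model in [("LIBLINEAR, linear", LinearSVR(C=1.0, max_iter=5000)),
                            ("libsvm, RBF kernel", SVR(kernel="rbf", C=1.0))]:
            t0 = time.time()
            model.fit(X_tr, y_tr)
            print(f"{name}: R^2 = {model.score(X_te, y_te):.3f}, fit time = {time.time() - t0:.1f} s")

    Even at this modest size the linear solver finishes far sooner; the gap widens rapidly with sample count, which is the paper's point about million-structure datasets.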

  17. Some technical constraints on possible Tokamak machines from next generation to reactor size

    International Nuclear Information System (INIS)

    Knobloch, A.

    1975-11-01

    A simplified consistent scaling of possible Tokamak reactors is set up in the power range of 0.1-10 GW. The influence of some important parameters on the scaling is shown and the role of some technical constraints is discussed. The scaling is evaluated for the two cases of a circular and a strongly elongated plasma cross-section. (orig.) [de]

  18. Inverse size scaling of the nucleolus by a concentration-dependent phase transition.

    Science.gov (United States)

    Weber, Stephanie C; Brangwynne, Clifford P

    2015-03-02

    Just as organ size typically increases with body size, the size of intracellular structures changes as cells grow and divide. Indeed, many organelles, such as the nucleus [1, 2], mitochondria [3], mitotic spindle [4, 5], and centrosome [6], exhibit size scaling, a phenomenon in which organelle size depends linearly on cell size. However, the mechanisms of organelle size scaling remain unclear. Here, we show that the size of the nucleolus, a membraneless organelle important for cell-size homeostasis [7], is coupled to cell size by an intracellular phase transition. We find that nucleolar size directly scales with cell size in early C. elegans embryos. Surprisingly, however, when embryo size is altered, we observe inverse scaling: nucleolar size increases in small cells and decreases in large cells. We demonstrate that this seemingly contradictory result arises from maternal loading of a fixed number rather than a fixed concentration of nucleolar components, which condense into nucleoli only above a threshold concentration. Our results suggest that the physics of phase transitions can dictate whether an organelle assembles, and, if so, its size, providing a mechanistic link between organelle assembly and cell size. Since the nucleolus is known to play a key role in cell growth, this biophysical readout of cell size could provide a novel feedback mechanism for growth control. Copyright © 2015 Elsevier Ltd. All rights reserved.
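
    A minimal numerical sketch of the fixed-number versus fixed-concentration argument follows; all parameter values are arbitrary illustrations, not measured quantities. Because the loaded number of molecules is fixed, the excess above the saturation concentration, and hence the condensed nucleolar volume, shrinks as cell volume grows:

        import numpy as np

        # Maternal loading fixes the NUMBER of nucleolar molecules per cell, not their concentration.
        N_loaded = 1.0e6       # molecules per cell (arbitrary illustration)
        c_sat = 2.0e3          # saturation concentration above which condensation occurs (arbitrary)
        c_dense = 2.0e4        # concentration inside the condensed (nucleolar) phase (arbitrary)

        for V_cell in [100.0, 300.0, 1000.0]:                    # relative cell volumes
            N_condensed = max(N_loaded - c_sat * V_cell, 0.0)    # only the excess condenses
            V_nucleolus = N_condensed / c_dense
            print(f"V_cell={V_cell:7.1f}  V_nucleolus={V_nucleolus:7.1f}")

    Small cells yield large nucleoli and sufficiently large cells yield none at all, reproducing both the inverse scaling and the assembly threshold described above.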

  19. Energetics, scaling and sexual size dimorphism of spiders.

    Science.gov (United States)

    Grossi, B; Canals, M

    2015-03-01

    The extreme sexual size dimorphism in spiders has motivated studies for many years. In many species the male can be very small relative to the female. There are several hypotheses trying to explain this fact, most of them emphasizing the role of energy in determining spider size. The aim of this paper is to review the role of energy in sexual size dimorphism of spiders, even for those spiders that do not necessarily live in high foliage, using physical and allometric principles. Here we propose that the cost of transport or equivalently energy expenditure and the speed are traits under selection pressure in male spiders, favoring those of smaller size to reduce travel costs. The morphology of the spiders responds to these selective forces depending upon the lifestyle of the spiders. Climbing and bridging spiders must overcome the force of gravity. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. In wandering spiders with low population density and as a consequence few male-male interactions, high speed and low energy expenditure or cost of transport should be favored by natural selection. Pendulum mechanics show the advantages of long legs in spiders and their relationship with high speed, even in climbing and bridging spiders. Thus small size, compensated by long legs should be the expected morphology for a fast and mobile male spider.

  20. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    Science.gov (United States)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 of the time, and about 78% for 19/20 of the time, when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
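
    The following sketch illustrates, rather than reproduces, the training step: Haar-like features computed on integral images feed an AdaBoost classifier. scikit-image and scikit-learn stand in for the cloud pipeline, and the patches and labels are random placeholders for annotated vehicle/non-vehicle examples:

        import numpy as np
        from skimage.transform import integral_image
        from skimage.feature import haar_like_feature
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(0)
        patches = rng.random((100, 12, 12))               # stand-in image patches
        labels = rng.integers(0, 2, size=100)             # stand-in vehicle / background labels

        def haar_features(patch):
            ii = integral_image(patch)
            # All two-rectangle (horizontal) Haar-like features over the patch.
            return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                                     feature_type='type-2-x')

        X = np.array([haar_features(p) for p in patches])
        clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)
        print("training accuracy:", clf.score(X, labels))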

  1. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    Science.gov (United States)

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality poses impeding effects on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on big dimensional data before building classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resultant algorithm is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of the FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.
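
    A minimal numpy sketch of the idea, under the assumption that hidden-layer weights are taken as top right singular vectors of random data subsets; the subset sizes, node counts and sigmoid activation are illustrative choices, not the paper's exact settings:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.random((5000, 100))                        # stand-in "big dimensional" data
        y = (X[:, :10].sum(axis=1) > 5).astype(float)      # synthetic binary target

        # SVD hidden nodes from random subsets (divide-and-conquer approximation).
        n_hidden, n_subsets = 50, 5
        W = []
        for _ in range(n_subsets):
            idx = rng.choice(len(X), size=500, replace=False)
            _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
            W.append(Vt[:n_hidden // n_subsets])           # top right singular vectors as weights
        W = np.vstack(W)                                   # (n_hidden, n_features)

        H = 1.0 / (1.0 + np.exp(-X @ W.T))                 # sigmoid hidden layer
        beta = np.linalg.lstsq(H, y, rcond=None)[0]        # output weights by least squares (classical ELM)
        print("training accuracy:", (((H @ beta) > 0.5) == y).mean())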

  2. Avalanche size scaling in sheared three-dimensional amorphous solid

    DEFF Research Database (Denmark)

    Bailey, Nicholas; Schiøtz, Jakob; Lemaître, A.

    2007-01-01

    We study the statistics of plastic rearrangement events in a simulated amorphous solid at T=0. Events are characterized by the energy release and the "slip volume", the product of plastic strain and system volume. Their distributions for a given system size L appear to be exponential, but a characteristic event size cannot be inferred, because the mean values of these quantities increase as L^α with α ≈ 3/2. In contrast with results obtained in 2D models, we do not see simply connected avalanches. The exponent suggests a fractal shape of the avalanches, which is also evidenced…

  3. Deconfinement phase transition and finite-size scaling in SU(2) lattice gauge theory

    International Nuclear Information System (INIS)

    Mogilevskij, O.A.

    1988-01-01

    A calculation technique for deconfinement phase transition parameters, based on finite-size scaling theory, is suggested. The essence of the technique lies in constructing the universal scaling function from numerical data obtained on finite lattices of different sizes, and extracting from it the phase transition parameters of the infinite-lattice system. Finite-size scaling was originally developed in the theory of spin systems. The β critical index for the Polyakov loop and the SU(2) deconfinement temperature of lattice gauge theory are calculated on the basis of the finite-size scaling technique. The obtained value agrees with the critical index of magnetization in the three-dimensional Ising model.
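
    The data-collapse idea behind this technique can be shown schematically: with the correct critical coupling and exponent, curves for different lattice sizes fall onto a single universal function. The observable, β_c and ν below are synthetic placeholders, not the SU(2) results:

        import numpy as np

        # Synthetic observable obeying U(beta, L) = f((beta - beta_c) * L**(1/nu)).
        beta_c, nu = 2.30, 0.63                        # "true" values used to generate the data
        f = np.tanh                                    # stand-in for the universal scaling function
        betas = np.linspace(2.0, 2.6, 25)
        data = {L: f((betas - beta_c) * L ** (1 / nu)) for L in (8, 16, 32)}

        def collapse_spread(beta_c_try, nu_try):
            """Quality of the data collapse: spread between system sizes on a common x-grid."""
            grid = np.linspace(-2, 2, 50)
            curves = []
            for L, u in data.items():
                x = (betas - beta_c_try) * L ** (1 / nu_try)   # rescaled control parameter
                curves.append(np.interp(grid, x, u))
            return np.std(curves, axis=0).mean()

        print("spread with true (beta_c, nu):", collapse_spread(2.30, 0.63))  # near zero: collapse
        print("spread with wrong nu:        ", collapse_spread(2.30, 1.20))   # clearly larger

    Minimizing such a spread over trial (β_c, ν) is one practical way to read off the infinite-volume transition parameters from finite lattices.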

  4. Fault Diagnosis for Distribution Networks Using Enhanced Support Vector Machine Classifier with Classical Multidimensional Scaling

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-09-01

    In this paper, a new fault diagnosis technique based on the time domain reflectometry (TDR) method with a pseudo-random binary sequence (PRBS) stimulus and a support vector machine (SVM) classifier has been investigated to recognize different types of faults in radial distribution feeders. This novel technique considers the amplitude of the reflected signals and the peaks of the cross-correlation (CCR) between the reflected and incident waves for generating the fault current dataset for the SVM. Furthermore, this multi-layer enhanced SVM classifier is combined with a classical multidimensional scaling (CMDS) feature extraction algorithm and kernel parameter optimization to increase training speed and improve overall classification accuracy. The proposed technique has been tested on a radial distribution feeder to identify ten different types of faults, considering 12 input features generated by using Simulink software and the MATLAB Toolbox. The success rate of the SVM classifier is over 95%, which demonstrates the effectiveness and high accuracy of the proposed method.
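
    A hedged sketch of the CMDS-plus-SVM stage, with classical (Torgerson) multidimensional scaling implemented by double-centering; the 12 features and ten fault classes are random stand-ins for the TDR/CCR dataset:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics import pairwise_distances

        def classical_mds(X, k):
            """Classical (Torgerson) MDS: double-center squared distances, then eigendecompose."""
            D2 = pairwise_distances(X) ** 2
            n = len(D2)
            J = np.eye(n) - np.ones((n, n)) / n
            B = -0.5 * J @ D2 @ J
            vals, vecs = np.linalg.eigh(B)
            order = np.argsort(vals)[::-1][:k]
            return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))

        rng = np.random.default_rng(0)
        X = rng.random((300, 12))                              # stand-in for the 12 input features
        y = np.digitize(X[:, 0] + X[:, 1], np.linspace(0.2, 1.8, 9))   # stand-in for ten fault classes

        Z = classical_mds(X, k=5)                              # CMDS feature extraction step
        clf = SVC(kernel="rbf", C=10.0).fit(Z[:250], y[:250])
        print("held-out accuracy:", clf.score(Z[250:], y[250:]))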

  5. Peter J Derrick and the Grand Scale 'Magnificent Mass Machine' mass spectrometer at Warwick.

    Science.gov (United States)

    Colburn, A W; Derrick, Peter J; Bowen, Richard D

    2017-12-01

    The value of the Grand Scale 'Magnificent Mass Machine' mass spectrometer in investigating the reactivity of ions in the gas phase is illustrated by a brief analysis of previously unpublished work on metastable ionised n-pentyl methyl ether, which loses predominantly methanol and an ethyl radical, with very minor contributions from elimination of ethane and water. Expulsion of an ethyl radical is interpreted in terms of isomerisation to ionised 3-pentyl methyl ether, via distonic ions and, possibly, an ion-neutral complex comprising ionised ethylcyclopropane and methanol. This explanation is consistent with the closely similar behaviour of the labelled analogues, C3H7CH2CD2OCH3+• and C3H7CD2CH2OCH3+•, and is supported by the greater kinetic energy release associated with loss of ethane from ionised n-propyl methyl ether compared to that starting from directly generated ionised 3-pentyl methyl ether.

  6. Zonal Flow Dynamics and Size-scaling of Anomalous Transport

    International Nuclear Information System (INIS)

    Liu Chen; White, Roscoe B.; Zonca, F.

    2003-01-01

    Nonlinear equations for the slow space-time evolution of the radial drift wave envelope and zonal flow amplitude have been self-consistently derived for a model nonuniform tokamak equilibrium within the coherent 4-wave drift wave-zonal flow modulation interaction model of Chen, Lin, and White [Phys. Plasmas 7 (2000) 3129]. Solutions clearly demonstrate turbulence spreading due to nonlinearly enhanced dispersiveness and, consequently, the device-size dependence of the saturated wave intensities and transport coefficients

  7. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales

    Directory of Open Access Journals (Sweden)

    Jihoon Oh

    2017-09-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89) and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.

  8. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales.

    Science.gov (United States)

    Oh, Jihoon; Yun, Kyongsik; Hwang, Ji-Hyun; Chae, Jeong-Ho

    2017-01-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders ( N  = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.
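
    A minimal sketch of this kind of workflow, assuming scikit-learn's MLP as the artificial neural network and permutation importance as one plausible way to rank variable contributions; the data are synthetic placeholders, not the clinical scales:

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((573, 41))                     # stand-in: 31 scale scores + 10 sociodemographics
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 573) > 0.85).astype(int)  # synthetic labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
        print("AUROC, all 41 variables:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))

        # Rank variable contributions, then retrain with the top five predictors only.
        imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
        top5 = np.argsort(imp.importances_mean)[::-1][:5]
        clf5 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr[:, top5], y_tr)
        print("AUROC, top 5 variables:", round(roc_auc_score(y_te, clf5.predict_proba(X_te[:, top5])[:, 1]), 3))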

  9. Mechanical properties of micro-sized copper bending beams machined by the focused ion beam technique

    International Nuclear Information System (INIS)

    Motz, C.; Schoeberl, T.; Pippan, R.

    2005-01-01

    Micro-sized bending beams with thicknesses, t, from 7.5 down to 1.0 μm were fabricated with the focused ion beam technique from a copper single crystal with an {1 1 1} orientation. The beams were loaded with a nano-indenter and the force vs. displacement curves were recorded. A strong size effect was found, where the flow stress reaches almost 1 GPa for the thinnest beams. A common strain gradient plasticity approach was used to explain the size effect. However, the strong t^(-1.14) dependence of the flow stress could not be explained by this model. Additionally, the combination of two other dislocation mechanisms is discussed: the limitation of available dislocation sources and a dislocation pile-up at the beam centre. The contribution of the pile-up stress to the flow stress gives a t^(-1) dependence, which is in good agreement with the experimental results

  10. Automatic detection of ischemic stroke based on scaling exponent electroencephalogram using extreme learning machine

    Science.gov (United States)

    Adhi, H. A.; Wijaya, S. K.; Prawito; Badri, C.; Rezal, M.

    2017-03-01

    Stroke is one of the cerebrovascular diseases caused by obstruction of blood flow to the brain. Stroke is the leading cause of death in Indonesia and the second leading cause in the world, and it is also a cause of disability. Ischemic stroke accounts for most of all stroke cases. Obstruction of blood flow can cause tissue damage, which results in electrical changes in the brain that can be observed through the electroencephalogram (EEG). In this study, we present the results of automatic detection of ischemic stroke and normal subjects based on the scaling exponent of the EEG obtained through detrended fluctuation analysis (DFA), using an extreme learning machine (ELM) as the classifier. Signal processing was performed on 18 channels of EEG in the range of 0-30 Hz. The scaling exponents of the subjects were used as the input for the ELM to classify ischemic stroke. The performance of the detection was assessed by accuracy, sensitivity and specificity. The results showed that the proposed method classified ischemic stroke with 84% accuracy, 82% sensitivity and 87% specificity, using 120 hidden neurons and the sine function as the activation function of the ELM.
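
    A compact sketch of the feature-extraction step: DFA reduces each EEG channel to one scaling exponent, and the 18 per-channel exponents would form the feature vector fed to the ELM. The signal below is white noise, for which an exponent near 0.5 is expected:

        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128)):
            """Detrended fluctuation analysis: slope of log F(s) versus log s."""
            y = np.cumsum(x - np.mean(x))              # integrated (profile) signal
            F = []
            for s in scales:
                n_seg = len(y) // s
                segs = y[:n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                rms = []
                for seg in segs:                       # linearly detrend each segment
                    coef = np.polyfit(t, seg, 1)
                    rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
                F.append(np.mean(rms))
            slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
            return slope

        rng = np.random.default_rng(0)
        eeg_channel = rng.normal(size=4096)            # stand-in for one band-passed EEG channel
        print("scaling exponent alpha =", round(dfa_exponent(eeg_channel), 2))   # ~0.5 for white noise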

  11. Gyrokinetic simulations of turbulent transport: size scaling and chaotic behaviour

    International Nuclear Information System (INIS)

    Villard, L; Brunner, S; Casati, A; Aghdam, S Khosh; Lapillonne, X; McMillan, B F; Bottino, A; Dannert, T; Goerler, T; Hatzky, R; Jenko, F; Merz, F; Chowdhury, J; Ganesh, R; Garbet, X; Grandgirard, V; Latu, G; Sarazin, Y; Idomura, Y; Jolliet, S

    2010-01-01

    Important steps towards the understanding of turbulent transport have been made with the development of the gyrokinetic framework for describing turbulence and with the emergence of numerical codes able to solve the set of gyrokinetic equations. This paper presents some of the main recent advances in gyrokinetic theory and computing of turbulence. Solving 5D gyrokinetic equations for each species requires state-of-the-art high performance computing techniques involving massively parallel computers and parallel scalable algorithms. The various numerical schemes that have been explored until now, Lagrangian, Eulerian and semi-Lagrangian, each have their advantages and drawbacks. A past controversy regarding the finite size effect (finite ρ*) in ITG turbulence has now been resolved. It has triggered an intensive benchmarking effort and careful examination of the convergence properties of the different numerical approaches. Now, both Eulerian and Lagrangian global codes are shown to agree and to converge to the flux-tube result in the ρ* → 0 limit. It is found, however, that an appropriate treatment of geometrical terms is necessary: inconsistent approximations that are sometimes used can lead to important discrepancies. Turbulent processes are characterized by a chaotic behaviour, often accompanied by bursts and avalanches. Performing ensemble averages of statistically independent simulations, starting from different initial conditions, is presented as a way to assess the intrinsic variability of turbulent fluxes and obtain reliable estimates of the standard deviation. Further developments concerning non-adiabatic electron dynamics around mode-rational surfaces and electromagnetic effects are discussed.

  12. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    Science.gov (United States)

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications, due to the variability of eye images; hence, to date, they have required manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.
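
    The following OpenCV sketch mimics such a processing chain on a synthetic eye image; Otsu's method stands in for the paper's self-tuning threshold, and a single ellipse fit on the convex hull stands in for the dual-ellipse method:

        import cv2
        import numpy as np

        # Stand-in frame: a dark pupil disc on a brighter iris/background.
        frame = np.full((120, 160), 180, np.uint8)
        cv2.circle(frame, (80, 60), 20, 40, -1)
        frame = cv2.GaussianBlur(frame, (5, 5), 0)

        # Parameter-free threshold (Otsu) separates the dark pupil from the background.
        _, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        pupil = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(pupil)                  # rejects concavities from eyelid occlusion
        (cx, cy), (w, h), angle = cv2.fitEllipse(hull)
        print(f"pupil centre=({cx:.1f},{cy:.1f}) axes=({w:.1f},{h:.1f})")
        # Occlusion/blink cue: a large gap between raw contour and hull areas suggests eyelid cover.
        print("occlusion ratio:", 1 - cv2.contourArea(pupil) / cv2.contourArea(hull))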

  13. The Large Scale Machine Learning in an Artificial Society: Prediction of the Ebola Outbreak in Beijing

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2015-01-01

    Ebola virus disease (EVD) is distinguished by its high infectivity and mortality, so it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict possible epidemic situations in practice. Luckily, in recent years computational experiments based on artificial societies have appeared, providing a new approach to studying the propagation of EVD and analyzing the corresponding interventions. The rationality of the artificial society is therefore the key to the accuracy and reliability of the experimental results. Individuals' behaviors, along with their travel mode, directly affect propagation among individuals. Firstly, an artificial Beijing is reconstructed based on geodemographics, and machine learning is involved to optimize individuals' behaviors. Meanwhile, an Ebola course model and a propagation model are built according to the parameters observed in West Africa. Subsequently, the propagation mechanism of EVD is analyzed, the epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of the Chinese government, the conclusion is drawn that Ebola cannot break out on a large scale in the city of Beijing.

  14. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of an optimal estimator allows one to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LES runs very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation, in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LES for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
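
    A schematic of the first procedure, assuming scikit-learn's MLP as the ANN; the input parameters and the synthetic "exact" subgrid flux below are placeholders for the filtered-DNS database and the optimal-estimator input set:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Stand-in training database: resolved-field parameters -> "exact" SGS scalar flux.
        rng = np.random.default_rng(0)
        n = 5000
        grad_c = rng.normal(size=(n, 3))               # resolved scalar gradient (stand-in)
        strain = rng.normal(size=(n, 3))               # resolved strain-rate terms (stand-in)
        X = np.hstack([grad_c, strain])
        tau = -0.1 * grad_c.sum(axis=1) + 0.02 * strain.sum(axis=1)   # synthetic target flux

        ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, tau)
        print("structural performance (R^2 on training database):", round(ann.score(X, tau), 3))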

  15. Scale economies and optimal size in the Swiss gas distribution sector

    International Nuclear Information System (INIS)

    Alaeifar, Mozhgan; Farsi, Mehdi; Filippini, Massimo

    2014-01-01

    This paper studies the cost structure of Swiss gas distribution utilities. Several econometric models are applied to a panel of 26 companies over 1996-2000. Our main objective is to estimate the optimal size and scale economies of the industry and to study their possible variation with respect to network characteristics. The results indicate the presence of unexploited scale economies; however, very large companies in the sample, and companies with a disproportionate mixture of output and density, are an exception. Furthermore, the estimated optimal size for the majority of companies in the sample is far greater than their actual size, suggesting remarkable efficiency gains from reorganization of the industry. The results also highlight the effect of customer density on optimal size: networks with higher density or greater complexity have a lower optimal size.

    Highlights:
    • Presence of unexploited scale economies for small and medium sized companies.
    • Scale economies vary considerably with customer density.
    • Higher density or greater complexity is associated with lower optimal size.
    • Optimal size varies across the companies through unobserved heterogeneity.
    • Firms with low density can gain more from expanding firm size.

  16. Tipping the scales: Evolution of the allometric slope independent of average trait size.

    Science.gov (United States)

    Stillwell, R Craig; Shingleton, Alexander W; Dworkin, Ian; Frankino, W Anthony

    2016-02-01

    The scaling of body parts is central to the expression of morphology across body sizes and to the generation of morphological diversity within and among species. Although patterns of scaling-relationship evolution have been well documented for over one hundred years, little is known regarding how selection acts to generate these patterns. In part, this is because it is unclear to what extent the elements of log-linear scaling relationships (the intercept, or mean trait size, and the slope) can evolve independently. Here, using the wing-body size scaling relationship in Drosophila melanogaster as an empirical model, we use artificial selection to demonstrate that the slope of a morphological scaling relationship between an organ (the wing) and body size can evolve independently of mean organ or body size. We discuss our findings in the context of how selection likely operates on morphological scaling relationships in nature, the developmental basis for evolved changes in scaling, and the general approach of using individual-based selection experiments to study the expression and evolution of morphological scaling. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  17. Ultraprecision machining. Cho seimitsu kako

    Energy Technology Data Exchange (ETDEWEB)

    Suga, T [The Univ. of Tokyo, Tokyo (Japan). Research Center for Advanced Science and Technology]

    1992-10-05

    It is said that the precision regarded as "ultra" has improved from 0.1 µm to 0.01 µm in recent years. Ultraprecision machining is a production technology which, together with ultraprecision measurement and ultraprecision control, forms what is called nanotechnology. Accuracy means that the average machined size is close to the required value, i.e. the deviation errors are small; precision means that the scatter of the machined sizes is very small. Machining errors relate to both, and ultraprecision means that the combined errors are very small. In present ultraprecision machining, the precision relative to the size of the machined object is said to be of the order of 10^-6. The flatness of silicon wafers is usually less than 0.5 µm. The advent of atomic-scale machining is awaited as the limit of ultraprecision machining; machining that removes and adds atomic units using scanning probe microscopes is expected to actually reach this limit. 2 refs.

  18. Verification of Gyrokinetic Particle Simulation of Device Size Scaling of Turbulent Transport

    Institute of Scientific and Technical Information of China (English)

    LIN Zhihong; S. ETHIER; T. S. HAHM; W. M. TANG

    2012-01-01

    Verification and a historical perspective are presented on the gyrokinetic particle simulations that discovered the device size scaling of turbulent transport and identified the geometry model as the source of the long-standing disagreement between gyrokinetic particle and continuum simulations.

  19. How acoustic signals scale with individual body size: common trends across diverse taxa

    OpenAIRE

    Rafael L. Rodríguez; Marcelo Araya-Salas; David A. Gray; Michael S. Reichert; Laurel B. Symes; Matthew R. Wilkins; Rebecca J. Safran; Gerlinde Höbel

    2015-01-01

    We use allometric analysis to explore how acoustic signals scale on individual body size and to test hypotheses about the factors shaping relationships between signals and body size. Across case studies spanning birds, crickets, tree crickets, and tree frogs, we find that most signal traits had low coefficients of variation, shallow allometric scalings, and little dispersion around the allometric function. We relate variation in these measures to the shape of mate preferences and the level of...

  20. Scaling of lifting forces in relation to object size in whole body lifting

    NARCIS (Netherlands)

    Kingma, I.; van Dieen, J.H.; Toussaint, H.M.

    2005-01-01

    Subjects prepare for a whole body lifting movement by adjusting their posture and scaling their lifting forces to the expected object weight. The expectancy is based on visual and haptic size cues. This study aimed to find out whether lifting force overshoots related to object size cues disappear or

  1. On-line transient stability assessment of large-scale power systems by using ball vector machines

    International Nuclear Information System (INIS)

    Mohammadi, M.; Gharehpetian, G.B.

    2010-01-01

    In this paper, a ball vector machine (BVM) has been used for on-line transient stability assessment of large-scale power systems. To classify the system's transient security status, a BVM has been trained for all contingencies. The proposed BVM-based security assessment algorithm requires very little training time and space in comparison with artificial neural networks (ANN), support vector machines (SVM) and other machine-learning-based algorithms. In addition, the proposed algorithm has fewer support vectors (SV) and is therefore faster than existing algorithms for on-line applications. One of the main points in applying a machine learning method is feature selection. In this paper, a new decision tree (DT) based feature selection technique has been presented. The proposed BVM-based algorithm has been applied to the New England 39-bus power system. The simulation results show the effectiveness and stability of the proposed method for the on-line transient stability assessment of large-scale power systems. The proposed feature selection algorithm has been compared with different feature selection algorithms, and the simulation results demonstrate its effectiveness.

  2. Size-density scaling in protists and the links between consumer-resource interaction parameters.

    Science.gov (United States)

    DeLong, John P; Vasseur, David A

    2012-11-01

    Recent work indicates that the interaction between body-size-dependent demographic processes can generate macroecological patterns such as the scaling of population density with body size. In this study, we evaluate this possibility for grazing protists and also test whether demographic parameters in these models are correlated after controlling for body size. We compiled data on the body-size dependence of consumer-resource interactions and population density for heterotrophic protists grazing algae in laboratory studies. We then used nested dynamic models to predict both the height and slope of the scaling relationship between population density and body size for these protists. We also controlled for consumer size and assessed links between model parameters. Finally, we used the models and the parameter estimates to assess the individual- and population-level dependence of resource use on body-size and prey-size selection. The predicted size-density scaling for all models matched closely to the observed scaling, and the simplest model was sufficient to predict the pattern. Variation around the mean size-density scaling relationship may be generated by variation in prey productivity and area of capture, but residuals are relatively insensitive to variation in prey size selection. After controlling for body size, many consumer-resource interaction parameters were correlated, and a positive correlation between residual prey size selection and conversion efficiency neutralizes the apparent fitness advantage of taking large prey. Our results indicate that widespread community-level patterns can be explained with simple population models that apply consistently across a range of sizes. They also indicate that the parameter space governing the dynamics and the steady states in these systems is structured such that some parts of the parameter space are unlikely to represent real systems. Finally, predator-prey size ratios represent a kind of conundrum, because they are

  3. Meter-scale Urban Land Cover Mapping for EPA EnviroAtlas Using Machine Learning and OBIA Remote Sensing Techniques

    Science.gov (United States)

    Pilant, A. N.; Baynes, J.; Dannenberg, M.; Riegel, J.; Rudder, C.; Endres, K.

    2013-12-01

    US EPA EnviroAtlas is an online collection of tools and resources that provides geospatial data, maps, research, and analysis on the relationships between nature, people, health, and the economy (http://www.epa.gov/research/enviroatlas/index.htm). Using EnviroAtlas, you can see and explore information related to the benefits (e.g., ecosystem services) that humans receive from nature, including clean air, clean and plentiful water, natural hazard mitigation, biodiversity conservation, food, fuel, and materials, recreational opportunities, and cultural and aesthetic value. EPA developed several urban land cover maps at very high spatial resolution (one-meter pixel size) for a portion of EnviroAtlas devoted to urban studies. This urban mapping effort supported analysis of relations among land cover, human health and demographics at the US Census Block Group level. Supervised classification of 2010 USDA NAIP (National Agricultural Imagery Program) digital aerial photos produced eight-class land cover maps for several cities, including Durham, NC, Portland, ME, Tampa, FL, New Bedford, MA, Pittsburgh, PA, Portland, OR, and Milwaukee, WI. Semi-automated feature extraction methods were used to classify the NAIP imagery: genetic algorithms/machine learning, random forest, and object-based image analysis (OBIA). In this presentation we describe the image processing and fuzzy accuracy assessment methods used, and report on some sustainability and ecosystem service metrics computed using this land cover as input (e.g., carbon sequestration from the USFS iTREE model; health and demographics in relation to road buffer forest width). We also discuss the land cover classification schema (a modified Anderson Level 1, after the National Land Cover Data (NLCD)), and offer some observations on lessons learned.

    [Figure: Meter-scale urban land cover in Portland, OR, overlaid on a NAIP aerial photo; streets, buildings and individual trees are identifiable.]

  4. Percolation through voids around overlapping spheres: A dynamically based finite-size scaling analysis

    Science.gov (United States)

    Priour, D. J.

    2014-01-01

    The percolation threshold for flow or conduction through voids surrounding randomly placed spheres is calculated. With large-scale Monte Carlo simulations, we give a rigorous continuum treatment to the geometry of the impenetrable spheres and the spaces between them. To properly exploit finite-size scaling, we examine multiple systems of differing sizes, with suitable averaging over disorder, and extrapolate to the thermodynamic limit. An order parameter based on the statistical sampling of stochastically driven dynamical excursions and amenable to finite-size scaling analysis is defined, calculated for various system sizes, and used to determine the critical volume fraction ϕ_c = 0.0317 ± 0.0004 and the correlation length exponent ν = 0.92 ± 0.05.

  5. Vascularity and grey-scale sonographic features of normal cervical lymph nodes: variations with nodal size

    International Nuclear Information System (INIS)

    Ying, Michael; Ahuja, Anil; Brook, Fiona; Metreweli, Constantine

    2001-01-01

    AIM: This study was undertaken to investigate variations in the vascularity and grey-scale sonographic features of cervical lymph nodes with their size. MATERIALS AND METHODS: High resolution grey-scale sonography and power Doppler sonography were performed on 1133 cervical nodes in 109 volunteers who had a sonographic examination of the neck. Standardized parameters were used in power Doppler sonography. RESULTS: About 90% of lymph nodes with a maximum transverse diameter greater than 5 mm showed vascularity and an echogenic hilus. Smaller nodes were less likely to show vascularity and an echogenic hilus. As the size of the lymph nodes increased, the intranodal blood flow velocity increased significantly (P < 0.05). CONCLUSIONS: The findings provide a baseline for grey-scale and power Doppler sonography of normal cervical lymph nodes. Sonologists will find varying vascularity and grey-scale appearances when encountering nodes of different sizes.

  6. Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information

    OpenAIRE

    Wei-Jong Yang; Wei-Hau Du; Pau-Choo Chang; Jar-Ferr Yang; Pi-Hsia Hung

    2017-01-01

    The demands for smart visual thing recognition in various devices have increased rapidly for daily smart production, living and learning systems in recent years. This paper proposes a visual thing recognition system which combines binary scale-invariant feature transform (SIFT), a bag of words model (BoW), and support vector machine (SVM) classifiers using color information. Since traditional SIFT features and SVM classifiers only use the gray information, color information is still an importan...
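
    A sketch of the classical SIFT-BoW-SVM chain the paper builds on, using standard grayscale SIFT rather than the proposed binary, color-aware variant (requires OpenCV ≥ 4.4; the images and labels are random placeholders):

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        images = [rng.integers(0, 255, (64, 64), dtype=np.uint8) for _ in range(20)]  # stand-ins
        labels = rng.integers(0, 3, size=20)                                          # stand-in classes

        sift = cv2.SIFT_create()
        per_image = []
        for img in images:
            _, desc = sift.detectAndCompute(img, None)
            per_image.append(desc if desc is not None else np.zeros((1, 128), np.float32))

        # Bag of words: cluster all descriptors into a visual vocabulary.
        stack = np.vstack(per_image)
        k = min(16, len(stack))
        vocab = KMeans(n_clusters=k, n_init=4, random_state=0).fit(stack)

        def bow_hist(desc):
            words = vocab.predict(desc)
            return np.bincount(words, minlength=k) / len(words)

        X = np.array([bow_hist(d) for d in per_image])
        clf = SVC(kernel="rbf").fit(X, labels)
        print("training accuracy:", clf.score(X, labels))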

  7. Investigations of grain size dependent sediment transport phenomena on multiple scales

    Science.gov (United States)

    Thaxton, Christopher S.

    Sediment transport processes in coastal and fluvial environments resulting from disturbances such as urbanization, mining, agriculture, military operations, and climatic change have significant impacts on local, regional, and global environments. Primarily, these impacts include the erosion and deposition of sediment, channel network modification, reduction in downstream water quality, and the delivery of chemical contaminants. The scale and spatial distribution of these effects are largely attributable to the size distribution of the sediment grains that become eligible for transport. An improved understanding of advective and diffusive grain-size dependent sediment transport phenomena will lead to the development of more accurate predictive models and more effective control measures. To this end, three studies were performed that investigated grain-size dependent sediment transport on three different scales. Discrete particle computer simulations of sheet flow bedload transport on the scale of 0.1-100 millimeters were performed on a heterogeneous population of grains of various grain sizes. The relative transport rates and diffusivities of grains under both oscillatory and uniform, steady flow conditions were quantified. These findings suggest that boundary layer formalisms should describe surface roughness through a representative grain size that is functionally dependent on the applied flow parameters. On the scale of 1-10 m, experiments were performed to quantify the hydrodynamics and sediment capture efficiency of various baffles installed in a sediment retention pond, a commonly used sedimentation control measure in watershed applications. Analysis indicates that an optimum sediment capture effectiveness may be achieved based on baffle permeability, pond geometry and flow rate. Finally, on the scale of 10-1,000 m, a distributed, bivariate watershed terrain evolution module was developed within GRASS GIS. Simulation results for variable grain sizes and for

  8. In-situ monitoring of blood glucose level for dialysis machine by AAA-battery-size ATR Fourier spectroscopy

    Science.gov (United States)

    Hosono, Satsuki; Sato, Shun; Ishida, Akane; Suzuki, Yo; Inohara, Daichi; Nogo, Kosuke; Abeygunawardhana, Pradeep K.; Suzuki, Satoru; Nishiyama, Akira; Wada, Kenji; Ishimaru, Ichiro

    2015-07-01

    For blood glucose level measurement in dialysis machines, we proposed AAA-battery-size ATR (attenuated total reflection) Fourier spectroscopy in the mid-infrared region. The proposed one-shot Fourier spectroscopic imaging is a near-common-path, spatial phase-shift interferometer with high time resolution. Because a large number of spectra, equal to the camera frame rate (e.g. 60 Hz) multiplied by the number of pixels, can be obtained in 1 s, statistical averaging enables highly accurate spectral measurement. We evaluated the quantitative accuracy of our proposed method for measuring glucose concentration in the near-infrared region with liquid cells. We confirmed that the absorbance at 1600 nm had a high correlation with glucose concentration (correlation coefficient: 0.92). But when measuring whole blood, complex light phenomena caused by red blood cells, such as scattering and multiple reflection, deteriorate the spectral data. Thus, we also proposed ultrasound-assisted spectroscopic imaging, which traps particles at the nodes of a standing wave; if the ATR prism is oscillated mechanically, an anti-node area is generated around the evanescent light field on the prism surface. By eliminating the complex light phenomena of red blood cells, the glucose concentration in whole blood can be quantified with high accuracy. In this report, we successfully trapped red blood cells in normal saline solution with an ultrasonic standing wave (frequency: 2 MHz).

  9. Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data

    Science.gov (United States)

    Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    The most productive scale size (MPSS) is a measure that states how resources should be organized and utilized to achieve optimal results, and it can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate MPSS, each decision making unit (DMU) should pay attention to its level of input-output efficiency; with the data envelopment analysis (DEA) method, a DMU can identify the units used as references, which helps to find the causes of and solutions to inefficiency and to optimize productivity, the main advantage in managerial applications. Therefore, data envelopment analysis (DEA) is chosen for estimating MPSS, focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating MPSS with integer-valued input data in the DEA method.
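
    As a sketch of the machinery involved (not the paper's integer-data formulation), the input-oriented CCR envelopment model for a DMU o can be written as a small linear program: minimize θ subject to Σ_j λ_j x_ij ≤ θ x_io for each input i, Σ_j λ_j y_rj ≥ y_ro for each output r, and λ ≥ 0. A toy version with SciPy:

        import numpy as np
        from scipy.optimize import linprog

        # Toy data: 5 DMUs, 2 integer inputs, 1 integer output.
        X = np.array([[2, 4], [3, 3], [4, 2], [5, 5], [6, 3]], float)   # inputs (DMUs x inputs)
        Y = np.array([[3], [4], [3], [5], [4]], float)                   # outputs (DMUs x outputs)
        n = len(X)

        def ccr_efficiency(o):
            """Input-oriented CCR envelopment LP for DMU o; variables are [theta, lambda_1..n]."""
            c = np.r_[1.0, np.zeros(n)]
            A_in = np.c_[-X[o][:, None], X.T]             # sum_j lam_j x_ij - theta x_io <= 0
            A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]   # -sum_j lam_j y_rj <= -y_ro
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
            return res.x[0]

        for o in range(n):
            print(f"DMU {o}: CCR efficiency = {ccr_efficiency(o):.3f}")

    In the standard DEA treatment, comparing this CCR score (constant returns) with the BCC score (which adds the convexity constraint Σ_j λ_j = 1) gives the scale efficiency; a DMU whose scale efficiency equals 1 is operating at its most productive scale size.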

  10. A Mathematical Model for Scheduling a Batch Processing Machine with Multiple Incompatible Job Families, Non-identical Job dimensions, Non-identical Job sizes, Non-agreeable release times and due dates

    International Nuclear Information System (INIS)

    Ramasubramaniam, M; Mathirajan, M

    2013-01-01

    The paper addresses the problem of scheduling a batch processing machine with multiple incompatible job families, non-identical job dimensions, non-identical job sizes and non-agreeable release dates, to minimize makespan. The research problem is solved by proposing a mixed integer programming model that appropriately takes into account the parameters considered in the problem. The proposed model is validated using a numerical example. The experiments conducted show that the model can pose significant difficulties when solving large-scale instances. The paper concludes by giving the scope for future work and some alternative approaches that one can use for solving this class of problems.
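
    A toy version of such a formulation, written with the PuLP modeller; the instance, the common batch processing time and the bound on the number of batches are invented for illustration and simplify the paper's richer model:

        import pulp

        # Toy instance: job -> (family, size, release time); one batch machine with capacity B.
        jobs = {1: ("A", 3, 0), 2: ("A", 4, 2), 3: ("B", 5, 0), 4: ("B", 2, 1)}
        B, H, p = 8, 4, 1.0        # capacity, max number of batches, common batch processing time

        m = pulp.LpProblem("batch_scheduling", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", (list(jobs), range(H)), cat="Binary")   # job j in batch b
        u = pulp.LpVariable.dicts("u", range(H), cat="Binary")                 # batch b is used
        s = pulp.LpVariable.dicts("s", range(H), lowBound=0)                   # batch start times
        Cmax = pulp.LpVariable("Cmax", lowBound=0)

        m += Cmax                                                              # minimize makespan
        for j in jobs:
            m += pulp.lpSum(x[j][b] for b in range(H)) == 1                    # each job in one batch
        for b in range(H):
            m += pulp.lpSum(jobs[j][1] * x[j][b] for j in jobs) <= B * u[b]    # capacity if used
            for j in jobs:
                m += s[b] >= jobs[j][2] * x[j][b]                              # batch waits for releases
            m += Cmax >= s[b] + p * u[b]
            if b > 0:
                m += s[b] >= s[b - 1] + p * u[b - 1]                           # batches run sequentially
            # Incompatible families: no two jobs from different families share a batch.
            for j in jobs:
                for k in jobs:
                    if j < k and jobs[j][0] != jobs[k][0]:
                        m += x[j][b] + x[k][b] <= 1

        m.solve(pulp.PULP_CBC_CMD(msg=False))
        print("makespan =", pulp.value(Cmax))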

  11. Conceptual design of current lead for large scale high temperature superconducting rotating machine

    International Nuclear Information System (INIS)

    Le, T. D.; Kim, J. H.; Park, S. I.; Kim, H. M.

    2014-01-01

    High-temperature superconducting (HTS) rotating machines always require an electric current of from several hundred to several thousand amperes to be led from outside into the cold region of the field coil. Heat losses through the current leads then assume tremendous importance. Consequently, it is necessary to find an optimal design for the leads, one which achieves minimum heat loss during machine operation for a given electrical current. In this paper, the conduction-cooled current lead type of a 10 MW-class HTS rotating machine is chosen, and a conceptual design is discussed and carried out, relying on an estimation of which of a conventional metal lead and a partially HTS lead gives the least heat loss. In addition, the steady-state thermal characteristics of each are considered and illustrated.

  12. Electrochemical machining of internal built-up surfaces of large-sized vessels for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Ryabchenko, N N; Pulin, V Ya [Vsesoyuznyj Proektno-Tekhnologicheskij Inst. Atomnogo Mashinostroeniya i Kotlostroeniya, Rostov-na-Donu (USSR)]

    1977-01-01

    Electrochemical machining (ECM) has been employed for finishing mechanically processed inner surfaces of large lateral parts of vessel bodies with a welded 0Kh18N10T steel overlayer. The finishing technology developed reduces the surface roughness from 10 µm to the standard 2.5 µm at a machining rate of 2-4 m² per hour.

  13. DYNAMIC TENSILE TESTING WITH A LARGE SCALE 33 MJ ROTATING DISK IMPACT MACHINE

    OpenAIRE

    Kussmaul, K.; Zimmermann, C.; Issler, W.

    1985-01-01

    A recently completed testing machine for dynamic tensile tests is described. The machine consists essentially of a pendulum which holds the specimen and a large steel disk with a double striking nose fixed to its circumference. Disk diameter measures 2000 mm, while its mass is 6400 kg. The specimens to be tested are tensile specimens with a diameter of up to 20 mm and 300 mm length or CT 15 specimens at various temperatures. Loading velocity ranges from 1 to 150 m/s. The process of specimen-n...

  14. Scaling range sizes to threats for robust predictions of risks to biodiversity.

    Science.gov (United States)

    Keith, David A; Akçakaya, H Resit; Murray, Nicholas J

    2018-04-01

    Assessments of risk to biodiversity often rely on spatial distributions of species and ecosystems. Range-size metrics used extensively in these assessments, such as area of occupancy (AOO), are sensitive to measurement scale, prompting proposals to measure them at finer scales or at different scales based on the shape of the distribution or ecological characteristics of the biota. Despite its dominant role in red-list assessments for decades, appropriate spatial scales of AOO for predicting risks of species' extinction or ecosystem collapse remain untested and contentious. There are no quantitative evaluations of the scale-sensitivity of AOO as a predictor of risks, the relationship between optimal AOO scale and threat scale, or the effect of grid uncertainty. We used stochastic simulation models to explore risks to ecosystems and species with clustered, dispersed, and linear distribution patterns subject to regimes of threat events with different frequency and spatial extent. Area of occupancy was an accurate predictor of risk (0.81<|r|<0.98) and performed optimally when measured with grid cells 0.1-1.0 times the largest plausible area threatened by an event. Contrary to previous assertions, estimates of AOO at these relatively coarse scales were better predictors of risk than finer-scale estimates of AOO (e.g., when measurement cells are <1% of the area of the largest threat). The optimal scale depended on the spatial scales of threats more than the shape or size of biotic distributions. Although we found appreciable potential for grid-measurement errors, current IUCN guidelines for estimating AOO neutralize geometric uncertainty and incorporate effective scaling procedures for assessing risks posed by landscape-scale threats to species and ecosystems. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  15. Equilibrium and off-equilibrium trap-size scaling in one-dimensional ultracold bosonic gases

    International Nuclear Information System (INIS)

    Campostrini, Massimo; Vicari, Ettore

    2010-01-01

    We study some aspects of equilibrium and off-equilibrium quantum dynamics of dilute bosonic gases in the presence of a trapping potential. We consider systems with a fixed number of particles and study their scaling behavior with increasing the trap size. We focus on one-dimensional bosonic systems, such as gases described by the Lieb-Liniger model and its Tonks-Girardeau limit of impenetrable bosons, and gases constrained in optical lattices as described by the Bose-Hubbard model. We study their quantum (zero-temperature) behavior at equilibrium and off equilibrium during the unitary time evolution arising from changes of the trapping potential, which may be instantaneous or described by a power-law time dependence, starting from the equilibrium ground state for an initial trap size. Renormalization-group scaling arguments and analytical and numerical calculations show that the trap-size dependence of the equilibrium and off-equilibrium dynamics can be cast in the form of a trap-size scaling in the low-density regime, characterized by universal power laws of the trap size, in dilute gases with repulsive contact interactions and lattice systems described by the Bose-Hubbard model. The scaling functions corresponding to several physically interesting observables are computed. Our results are of experimental relevance for systems of cold atomic gases trapped by tunable confining potentials.

  16. Trap-size scaling in confined-particle systems at quantum transitions

    International Nuclear Information System (INIS)

    Campostrini, Massimo; Vicari, Ettore

    2010-01-01

    We develop a trap-size scaling theory for trapped particle systems at quantum transitions. As a theoretical laboratory, we consider a quantum XY chain in an external transverse field acting as a trap for the spinless fermions of its quadratic Hamiltonian representation. We discuss trap-size scaling at the Mott insulator to superfluid transition in the Bose-Hubbard model. We present exact and accurate numerical results for the XY chain and for the low-density Mott transition in the hard-core limit of the one-dimensional Bose-Hubbard model. Our results are relevant for systems of cold atomic gases in optical lattices.

  17. Nonstandard scaling law of fluctuations in finite-size systems of globally coupled oscillators.

    Science.gov (United States)

    Nishikawa, Isao; Tanaka, Gouhei; Aihara, Kazuyuki

    2013-08-01

    Universal scaling laws form one of the central issues in physics. A nonstandard scaling law or a breakdown of a standard scaling law, on the other hand, can often lead to the finding of a new universality class in physical systems. Recently, we found that a statistical quantity related to fluctuations follows a nonstandard scaling law with respect to the system size in a synchronized state of globally coupled nonidentical phase oscillators [I. Nishikawa et al., Chaos 22, 013133 (2012)]. However, it is still unclear how widely this nonstandard scaling law is observed. In the present paper, we discuss the conditions required for the unusual scaling law in globally coupled oscillator systems and validate the conditions by numerical simulations of several different models.

  18. Towards large-scale FAME-based bacterial species identification using machine learning techniques.

    Science.gov (United States)

    Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul

    2009-05-01

    In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we have selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas. Only those profiles resulting from standard growth conditions have been retained. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models have resulted in good species identification results for the three genera. For Bacillus, Paenibacillus and Pseudomonas, random forests have resulted in sensitivity values, respectively, 0.847, 0.901 and 0.708. The random forests models outperform those of the other machine learning techniques. Moreover, our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational species

  19. Theory of critical phenomena in finite-size systems scaling and quantum effects

    CERN Document Server

    Brankov, Jordan G; Tonchev, Nicholai S

    2000-01-01

    The aim of this book is to familiarise the reader with the rich collection of ideas, methods and results available in the theory of critical phenomena in systems with confined geometry. The existence of universal features of the finite-size effects arising due to highly correlated classical or quantum fluctuations is explained by the finite-size scaling theory. This theory (1) offers an interpretation of experimental results on finite-size effects in real systems; (2) gives the most reliable tool for extrapolation to the thermodynamic limit of data obtained by computer simulations; (3) reveals
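
    For orientation, the finite-size scaling ansatz underlying this book (and several of the records below) can be stated compactly; the following is the standard textbook form rather than a result specific to this record.

```latex
% Standard finite-size scaling ansatz (textbook form). For an observable X that
% diverges in the bulk as X ~ |t|^{-rho}, with reduced temperature
% t = (T - T_c)/T_c and correlation length xi ~ |t|^{-nu}, a finite system of
% linear size L obeys
\[
  X(t, L) = L^{\rho/\nu} \, f\!\left( t \, L^{1/\nu} \right),
\]
% where f is a universal scaling function: for t L^{1/nu} >> 1 it reproduces
% the bulk power law, f(y) ~ y^{-rho}, while at criticality (t = 0) the
% observable grows with system size as L^{rho/nu}.
```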

  20. Studying time of flight imaging through scattering media across multiple size scales (Conference Presentation)

    Science.gov (United States)

    Velten, Andreas

    2017-05-01

    Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales, for example when imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, as in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We present a study of scattering and of methods for imaging through scattering media across different scales, particularly with respect to the use of time-of-flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid laboratory research, and to transfer knowledge and methodology between different fields.

  1. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S. Department of Energy has recommended large-scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct-drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large-scale wind turbine application. Thus, size reduction techniques are needed for the viability of PMGs in large-scale wind turbines. Two size reduction techniques are presented. It is demonstrated that a 25% size reduction of a 10 MW PMG is possible with a high-remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer-rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  2. A finite size scaling test of an SU(2) gauge-spin system

    International Nuclear Information System (INIS)

    Tomiya, M.; Hattori, T.

    1984-01-01

    We calculate the correlation functions in the SU(2) gauge-spin system with spins in the fundamental representation. We analyze the results making use of finite-size scaling. There is a possibility that there are no second-order phase transition lines in this model, contrary to previous assertions. (orig.)

  3. Finite-size scaling for quantum chains with an oscillatory energy gap

    International Nuclear Information System (INIS)

    Hoeger, C.; Gehlen, G. von; Rittenberg, V.

    1984-07-01

    We show that the existence of zeroes of the energy gap for finite quantum chains is related to a nonvanishing wavevector. Finite-size scaling ansaetze are formulated for incommensurable and oscillatory structures. The ansaetze are verified in the one-dimensional XY model in a transverse field. (orig.)

  4. Economies of scale and trends in the size of southern forest industries

    Science.gov (United States)

    James E. Granskog

    1978-01-01

    In each of the major southern forest industries, the trend has been toward achieving economies of scale, that is, to build larger production units to reduce unit costs. Current minimum efficient plant size estimated by survivor analysis is 1,000 tons per day capacity for sulfate pulping, 100 million square feet (3/8- inch basis) annual capacity for softwood plywood,...

  5. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik F. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center

    2017-05-31

    The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy, as measured by existing and new metrics which themselves were developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.

  6. Law machines: scale models, forensic materiality and the making of modern patent law.

    Science.gov (United States)

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  7. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    Science.gov (United States)

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement, based on evolutionary algorithms and the Kernel-Adatron, for solving large-scale classification problems, with a focus on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
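
    For readers unfamiliar with the Kernel-Adatron component, its core is a simple clipped update of the dual coefficients toward unit margin. The sketch below is a plain Kernel-Adatron on synthetic data; it omits the evolutionary-algorithm layer the paper adds on top, and the synchronous (vectorized) update is a simplification of the usual per-pattern sweep.

```python
# Plain Kernel-Adatron sketch (Friess-style margin update) for binary
# classification on synthetic 2-D data. Not the paper's full method: the
# evolutionary-algorithm training layer is omitted here.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)
alpha = np.zeros(len(y))
eta = 0.1                                  # learning rate
for _ in range(200):                       # Adatron sweeps
    margins = (alpha * y) @ K              # z_i = sum_j alpha_j y_j K(x_j, x_i)
    # additive update toward unit margin, clipped so alpha stays non-negative
    alpha = np.maximum(0.0, alpha + eta * (1.0 - y * margins))

pred = np.sign((alpha * y) @ K)
print("training accuracy:", (pred == y).mean())
```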

  8. Is the number and size of scales in Liolaemus lizards driven by climate?

    Science.gov (United States)

    José Tulli, María; Cruz, Félix B

    2018-05-03

    Ectothermic vertebrates are sensitive to thermal fluctuations in the environments where they occur. To buffer these fluctuations, ectotherms use different strategies, including the integument, which is a barrier that minimizes temperature exchange between the inner body and the surrounding air. In lizards, this barrier is constituted by keratinized scales of variable size, shape and texture, and its main functions are protection, avoidance of water loss and thermoregulation. The size of scales in lizards has been proposed to vary in relation to climatic gradients; however, it has also been observed that in some groups of Iguanian lizards it could be related to phylogeny. Thus, here, we studied the area and number of scales (dorsal and ventral) of 61 species of Liolaemus lizards distributed along a broad latitudinal and altitudinal gradient to determine the nature of the variation of the scales with climate, and found that the number and size of scales are related to climatic variables such as temperature, and to geographical variables such as altitude. The evolutionary process that best explained how these morphological variables evolved was the Ornstein-Uhlenbeck model. The number of scales seemed to be related to common ancestry, whereas dorsal and ventral scale areas seemed to vary as a consequence of ecological traits. In fact, the ventral area is less exposed to climate conditions such as ultraviolet radiation or wind and is thus under less pressure to change in response to alterations in external conditions. It is possible that scale ornamentation such as keels and granulosity may provide further information in this regard.

  9. Scale and size effects in dynamic fracture of concretes and rocks

    Directory of Open Access Journals (Sweden)

    Petrov Y.

    2015-01-01

    A structural-temporal approach based on the notion of incubation time is used to interpret strain-rate effects in the fracture of concretes and rocks. It is established that the temporal dependences of the strength of concretes and rocks can be calculated using the incubation time criterion. The experimentally observed differences between the ultimate stresses of concrete and mortar under static and dynamic conditions are explained. It is found that the compressive strength of mortar at a low strain rate is greater than that of concrete, but at a high strain rate the opposite is true. The influence of confinement pressure on the mechanism of dynamic strength for concretes and rocks is discussed. Both the size effect and the scale effect for concrete and rock samples subjected to impact loading are analyzed. The statistical nature of the size effect contrasts with the scale effect, which is related to the definition of a spatio-temporal representative volume determining the fracture event on the given scale level.

  10. Scale-Dependent Habitat Selection and Size-Based Dominance in Adult Male American Alligators.

    Directory of Open Access Journals (Sweden)

    Bradley A Strickland

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their

  11. Scale-dependent habitat selection and size-based dominance in adult male American alligators

    Science.gov (United States)

    Strickland, Bradley A.; Vilella, Francisco; Belant, Jerrold L.

    2016-01-01

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance

  12. FR-type radio sources in COSMOS: relation of radio structure to size, accretion modes and large-scale environment

    Science.gov (United States)

    Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnener, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team

    2018-01-01

    The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio structures, from simple to more complex, giving rise to a variety of classification schemes. The question which still remains open, given deeper surveys revealing new populations of radio sources, is whether this plethora of radio structures can be attributed to the physical properties of the host or to the environment. Here we present an analysis of the radio structure of radio-selected AGN from the VLA-COSMOS Large Project at 3 GHz (JVLA-COSMOS; Smolčić et al.) in relation to: 1) their linear projected size, 2) the Eddington ratio, and 3) the environment their hosts lie within. We classify these as FRI (jet-like) and FRII (lobe-like) based on the FR-type classification scheme, and compare them to a sample of jet-less radio AGN in JVLA-COSMOS. We measure their linear projected sizes using a semi-automatic machine learning technique. Their Eddington ratios are calculated from X-ray data available for COSMOS. As environmental probes we take the X-ray groups (hundreds of kpc) and the density fields (~Mpc scale) in COSMOS. We find that FRII radio sources are on average larger than FRIs, which agrees with the literature. But contrary to past studies, we find no dichotomy in FR objects in JVLA-COSMOS given their Eddington ratios, as on average they exhibit similar values. Furthermore, our results show that the large-scale environment does not explain the observed dichotomy between lobe- and jet-like FR-type objects, as both types are found in similar environments; it does, however, affect the shape of the radio structure, introducing bends for objects closer to the centre of an X-ray group.

  13. Empirical evidence for multi-scaled controls on wildfire size distributions in California

    Science.gov (United States)

    Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.

    2014-12-01

    Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by definition, purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions indicated that most ecoregions' distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit the larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggested that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine if observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire size distributions are multi-scaled and likely are not purely SOC.
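
    The simplest of the power-law fits mentioned above is the one-parameter continuous maximum-likelihood estimator (the Clauset-Shalizi-Newman form). A self-contained sketch follows; the fire sizes are synthetic and the lower cutoff xmin is an assumption, not a value from the study.

```python
# Continuous power-law MLE for an event-size tail:
#   alpha_hat = 1 + n / sum(ln(x_i / xmin))  for all sizes x_i >= xmin.
# Synthetic "fire sizes" are used here; xmin is a hypothetical cutoff.
import numpy as np

rng = np.random.default_rng(1)
xmin = 100.0                                   # hectares; hypothetical lower cutoff
alpha_true = 1.9
# inverse-CDF sampling of a Pareto tail: x = xmin * (1 - u)^(-1/(alpha - 1))
u = rng.random(5000)
sizes = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

tail = sizes[sizes >= xmin]
alpha_hat = 1.0 + len(tail) / np.log(tail / xmin).sum()
se = (alpha_hat - 1.0) / np.sqrt(len(tail))    # standard error of the MLE
print(f"alpha_hat = {alpha_hat:.3f} +/- {se:.3f}")
```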

  14. Top-spray fluid bed coating: Scale-up in terms of relative droplet size and drying force

    DEFF Research Database (Denmark)

    Hede, Peter Dybdahl; Bach, P.; Jensen, Anker Degn

    2008-01-01

    in terms of particle size fractions larger than 425 μm determined by sieve analysis. Results indicated that the particle size distribution may be reproduced across scale with statistically valid precision by keeping the drying force and the relative droplet size constant across scale. It is also shown...

  15. Coupling machine learning with mechanistic models to study runoff production and river flow at the hillslope scale

    Science.gov (United States)

    Marçais, J.; Gupta, H. V.; De Dreuzy, J. R.; Troch, P. A. A.

    2016-12-01

    Geomorphological structure and geological heterogeneity of hillslopes are major controls on runoff responses. The diversity of hillslopes (morphological shapes and geological structures) on the one hand, and the highly nonlinear runoff response on the other, make it difficult to transpose what has been learnt at one specific hillslope to another. Therefore, making reliable predictions of runoff generation or river flow for a given hillslope is a challenge. Applying classic model calibration (based on inverse-problem techniques) requires doing so for each specific hillslope and having data available for calibration; when applied to thousands of cases, this is not always feasible. Here we propose a novel modeling framework based on coupling process-based models with a data-based approach. First, we develop a mechanistic model, based on the hillslope-storage Boussinesq equations (Troch et al. 2003), able to model nonlinear runoff responses to rainfall at the hillslope scale. Second, we set up a model database representing thousands of uncalibrated simulations. These simulations investigate different hillslope shapes (real ones obtained by analyzing a 5 m digital elevation model of Brittany, and synthetic ones), different hillslope geological structures (i.e., different parametrizations) and different hydrologic forcing terms (i.e., different infiltration chronicles). Then, we train a machine learning model on this physically based model library. Machine learning model performance is then assessed by a classic validation phase (testing it on new hillslopes and comparing machine learning outputs with mechanistic outputs). Finally, we use this machine learning model to identify which hillslope properties control runoff. This methodology will be further tested by combining synthetic datasets with real ones.
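
    A schematic of the proposed coupling, with the mechanistic hillslope model reduced to a stand-in function, might look like the sketch below. The `boussinesq_runoff` placeholder, the parameter ranges and the surrogate choice are all assumptions for illustration; the actual study runs hillslope-storage Boussinesq simulations.

```python
# Schematic of the coupling described above: run a (mechanistic) model over
# many hillslope configurations, then train a machine-learning surrogate on
# the resulting database. `boussinesq_runoff` is a toy placeholder, not the
# real hillslope-storage Boussinesq solver.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def boussinesq_runoff(slope, conductivity, rain):
    """Placeholder for the mechanistic hillslope model."""
    return rain * np.tanh(conductivity * slope) + 0.01 * rng.normal()

# 1) build the simulation database over hillslope parameters and forcings
params = rng.uniform([0.01, 0.1, 0.0], [0.30, 5.0, 50.0], size=(2000, 3))
runoff = np.array([boussinesq_runoff(*p) for p in params])

# 2) train the surrogate on the physically based database
model = GradientBoostingRegressor().fit(params[:1500], runoff[:1500])

# 3) validate on configurations the surrogate has never seen
r2 = model.score(params[1500:], runoff[1500:])
print(f"surrogate R^2 on held-out hillslopes: {r2:.3f}")
```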

  16. A methodology to investigate size scale effects in crystalline plasticity using uniaxial compression testing

    International Nuclear Information System (INIS)

    Uchic, Michael D.; Dimiduk, Dennis M.

    2005-01-01

    A methodology for performing uniaxial compression tests on samples having micron-size dimensions is presented. Sample fabrication is accomplished using focused ion beam milling to create cylindrical samples of uniform cross-section that remain attached to the bulk substrate at one end. Once fabricated, samples are tested in uniaxial compression using a nanoindentation device outfitted with a flat tip, and a stress-strain curve is obtained. The methodology can be used to examine the plastic response of samples of different sizes that are from the same bulk material. In this manner, dimensional size effects at the micron scale can be explored for single crystals, using a readily interpretable test that minimizes imposed stretch and bending gradients. The methodology was applied to a single-crystal Ni superalloy and a transition from bulk-like to size-affected behavior was observed for samples 5 μm in diameter and smaller

  17. Power Scaling of Petroleum Field Sizes and Movie Box Office Earnings.

    Science.gov (United States)

    Haley, J. A.; Barton, C. C.

    2017-12-01

    The size-cumulative frequency distribution of petroleum fields has long been shown to be power scaling (Mandelbrot, 1963; Barton and Scholz, 1995). The scaling exponents for petroleum field volumes range from 0.8 to 1.08 worldwide and are used to assess the size and number of undiscovered fields. The size-cumulative frequency distribution of movie box office earnings also exhibits power scaling for domestic, overseas, and worldwide gross box office earnings for the top 668 earning movies released between 1939 and 2016 (http://www.boxofficemojo.com/alltime/). Box office earnings were reported in dollars-of-the-day and were converted to 2015 U.S. dollars using the U.S. consumer price index (CPI) for both domestic and overseas earnings, because overseas earnings are not reported by country and there is no single inflation index appropriate for all overseas countries. Adjusting the box office earnings using the CPI index has two effects on the power-function fits. First, the scaling exponent has a narrow range (2.3-2.5) across the three data sets; second, the scatter of the data points fit by the power function is reduced. The scaling exponents for the adjusted values are 2.3 for domestic, 2.5 for overseas, and 2.5 for worldwide box office earnings. The smaller the scaling exponent, the greater the share of all earnings contributed by a smaller proportion of all the movies: E = P^((a-2)/(a-1)), where E is the fraction of earnings, P is the fraction of all movies in the data set, and a is the scaling exponent. The scaling exponents for box office earnings (2.3-2.5) mean that approximately 20% of the top earning movies contribute about 70-55% of all the earnings for domestic and worldwide earnings, respectively.
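
    As a quick numerical check of the earnings-share relation quoted above, the snippet below reproduces the stated "top 20% earn roughly 70-55%" figures at the two ends of the exponent range.

```python
# Worked check of E = P**((a - 2) / (a - 1)) from the abstract: the share E of
# total earnings contributed by the top fraction P of movies, for exponent a.
for a in (2.3, 2.5):
    P = 0.20                      # top 20% of movies by earnings
    E = P ** ((a - 2.0) / (a - 1.0))
    print(f"a = {a}: top {P:.0%} of movies earn about {E:.0%} of the total")
# a = 2.3 gives ~69% and a = 2.5 gives ~58%, consistent with the quoted
# 70-55% range for domestic and worldwide earnings.
```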

  18. The influence of the negative-positive ratio and screening database size on the performance of machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Bojarski, Andrzej J

    2017-01-01

    The machine learning-based virtual screening of molecular databases is a commonly used approach to identify hits. However, many aspects associated with training predictive models can influence the final performance and, consequently, the number of hits found. Thus, we performed a systematic study of the simultaneous influence of the proportion of negatives to positives in the testing set, the size of the screening databases and the types of molecular representations on the effectiveness of classification. The results obtained for eight protein targets, five machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest), two types of molecular fingerprints (MACCS and CDK FP) and eight screening databases with different numbers of molecules confirmed our previous findings that increases in the ratio of negative to positive training instances greatly influenced most of the investigated parameters of the ML methods in simulated virtual screening experiments. However, the performance of screening was shown to also be highly dependent on the molecular library dimension. Generally, with increasing size of the screened database, the optimal training ratio also increased, and this ratio can be rationalized using the proposed cost-effectiveness threshold approach. To increase the performance of machine learning-based virtual screening, the training set should be constructed in a way that considers the size of the screening database.
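
    The kind of experiment described above can be sketched as a sweep over the negative:positive training ratio while performance is tracked on a large, imbalanced screen. Everything below is a synthetic stand-in: random bit vectors replace MACCS/CDK fingerprints, and one classifier stands in for the five studied.

```python
# Sketch of a negative:positive training-ratio sweep for virtual screening.
# Synthetic bit vectors stand in for molecular fingerprints; the real study
# used eight protein targets and screening libraries of varying size.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
n_bits = 166                                        # MACCS-like fingerprint length
actives = (rng.random((200, n_bits)) < 0.35).astype(int)
decoys = (rng.random((20000, n_bits)) < 0.30).astype(int)

# fixed, heavily imbalanced "screening database" for testing
X_test = np.vstack([actives[100:], decoys[10000:]])
y_test = np.array([1] * 100 + [0] * 10000)

for ratio in (1, 5, 10, 50):                        # negatives per positive
    X_train = np.vstack([actives[:100], decoys[:100 * ratio]])
    y_train = np.array([1] * 100 + [0] * 100 * ratio)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    mcc = matthews_corrcoef(y_test, clf.predict(X_test))
    print(f"neg:pos = {ratio}:1  MCC = {mcc:.3f}")
```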

  19. Settlement-Size Scaling among Prehistoric Hunter-Gatherer Settlement Systems in the New World.

    Directory of Open Access Journals (Sweden)

    W Randall Haas

    Settlement size predicts extreme variation in the rates and magnitudes of many social and ecological processes in human societies. Yet the factors that drive human settlement-size variation remain poorly understood. Size variation among economically integrated settlements tends to be heavy tailed such that the smallest settlements are extremely common and the largest settlements extremely large and rare. The upper tail of this size distribution is often formalized mathematically as a power-law function. Explanations for this scaling structure in human settlement systems tend to emphasize complex socioeconomic processes including agriculture, manufacturing, and warfare, behaviors that tend to differentially nucleate and disperse populations hierarchically among settlements. But the degree to which heavy-tailed settlement-size variation requires such complex behaviors remains unclear. By examining the settlement patterns of eight prehistoric New World hunter-gatherer settlement systems spanning three distinct environmental contexts, this analysis explores the degree to which heavy-tailed settlement-size scaling depends on the aforementioned socioeconomic complexities. Surprisingly, the analysis finds that power-law models offer plausible and parsimonious statistical descriptions of prehistoric hunter-gatherer settlement-size variation. This finding reveals that incipient forms of hierarchical settlement structure may have preceded socioeconomic complexity in human societies and points to a need for additional research to explicate how mobile foragers came to exhibit settlement patterns that are more commonly associated with hierarchical organization. We propose that hunter-gatherer mobility with preferential attachment to previously occupied locations may account for the observed structure in site-size variation.

  20. Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)

    2016-12-15

    We study the scaling properties of Higgs-Yukawa models. Using the technique of finite-size scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us the advantage of fitting the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.

  1. Sizing models and performance analysis of volumetric expansion machines for waste heat recovery through organic Rankine cycles on passenger cars

    OpenAIRE

    Guillaume, Ludovic; Legros, Arnaud; Quoilin, Sylvain; Declaye, Sébastien; Lemort, Vincent

    2013-01-01

    This paper aims at helping designers of waste heat recovery organic (or non-organic) Rankine cycles on internal combustion engines to best select the expander among the piston, scroll and screw machines, and the working fluids among R245fa, ethanol and water. The first part of the paper presents the technical constraints inherent to each machine through a state of the art of the three technologies. The second part of the paper deals with the modeling of such expanders. Finally, in the last pa...

  2. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the solving efficiency and the solution quality of the sub-problems, a detection method for multi-bottleneck machines based on the critical path is proposed, whereby the unscheduled operations can be decomposed into bottleneck operations and non-bottleneck operations. According to the principle that “the bottleneck leads the performance of the whole manufacturing system” in the Theory of Constraints (TOC), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the process of constructing the sub-problems, some operations from the previously scheduled sub-problem are moved into the subsequent sub-problem for re-optimization; this strategy improves the solution quality of the algorithm. In the process of solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value also improves the solution quality. Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem. They are as follows: the processing route of each job is predetermined, and the processing time of each operation is fixed; there is no machine breakdown, and no preemption of operations is allowed. These assumptions should be considered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the

  3. The square lattice Ising model on the rectangle II: finite-size scaling limit

    Science.gov (United States)

    Hucht, Alfred

    2017-06-01

    Based on the results published recently (Hucht 2017 J. Phys. A: Math. Theor. 50 065201), the universal finite-size contributions to the free energy of the square lattice Ising model on the L × M rectangle, with open boundary conditions in both directions, are calculated exactly in the finite-size scaling limit L, M → ∞, T → T_c, with fixed temperature scaling variable x ∝ (T/T_c − 1)M and fixed aspect ratio ρ ∝ L/M. We derive exponentially fast converging series for the related Casimir potential and Casimir force scaling functions. At the critical point T = T_c we confirm predictions from conformal field theory (Cardy and Peschel 1988 Nucl. Phys. B 300 377; Kleban and Vassileva 1991 J. Phys. A: Math. Gen. 24 3407). The presence of corners and the related corner free energy has dramatic impact on the Casimir scaling functions and leads to a logarithmic divergence of the Casimir potential scaling function at criticality.

  4. Turbulent Concentration of MM-Size Particles in the Protoplanetary Nebula: Scaled-Dependent Multiplier Functions

    Science.gov (United States)

    Cuzzi, Jeffrey N.; Hartlep, Thomas; Weston, B.; Estremera, Shariff Kareem

    2014-01-01

    The initial accretion of primitive bodies (asteroids and TNOs) from freely-floating nebula particles remains problematic. Here we focus on the asteroids, where constituent particle (read "chondrule") sizes are observationally known; similar arguments will hold for TNOs, but the constituent particles in those regions will be smaller, or will be fluffy aggregates, and are unobserved. Traditional growth-by-sticking models encounter a formidable "meter-size barrier" [1] (or even a mm-cm-size barrier [2]) in turbulent nebulae, while nonturbulent nebulae form large asteroids too quickly to explain long spreads in formation times, or the dearth of melted asteroids [3]. Even if growth by sticking could somehow breach the meter-size barrier, other obstacles are encountered through the 1-10 km size range [4]. Another clue regarding planetesimal formation is an apparent 100 km diameter peak in the pre-depletion, pre-erosion mass distribution of asteroids [5]; scenarios leading directly from independent nebula particulates to this size, which avoid the problematic m-km size range, could be called "leapfrog" scenarios [6-8]. The leapfrog scenario we have studied in detail involves formation of dense clumps of aerodynamically selected, typically mm-size particles in turbulence, which can under certain conditions shrink inexorably on 100-1000 orbit timescales and form 10-100 km diameter sandpile planetesimals. The typical sizes of planetesimals and the rate of their formation [7,8] are determined by a statistical model with properties inferred from large numerical simulations of turbulence [9]. Nebula turbulence can be described by its Reynolds number Re = (L/η)^(4/3), where L = Hα^(1/2) is the largest eddy scale, H is the nebula gas vertical scale height, α the nebula turbulent viscosity parameter, and η is the Kolmogorov or smallest scale in turbulence (typically about 1 km), with eddy turnover time t_η. In the nebula, Re is far larger than any numerical simulation can

  5. Turbulent Concentration of mm-Size Particles in the Protoplanetary Nebula: Scale-Dependent Cascades

    Science.gov (United States)

    Cuzzi, J. N.; Hartlep, T.

    2015-01-01

    The initial accretion of primitive bodies (here, asteroids in particular) from freely-floating nebula particles remains problematic. Traditional growth-by-sticking models encounter a formidable "meter-size barrier" (or even a mm-to-cm-size barrier) in turbulent nebulae, making the preconditions for so-called "streaming instabilities" difficult to achieve even for so-called "lucky" particles. Even if growth by sticking could somehow breach the meter-size barrier, turbulent nebulae present further obstacles through the 1-10 km size range. On the other hand, nonturbulent nebulae form large asteroids too quickly to explain long spreads in formation times, or the dearth of melted asteroids. Theoretical understanding of nebula turbulence is itself in flux; recent models of MRI (magnetically-driven) turbulence favor low- or no-turbulence environments, but purely hydrodynamic turbulence is making a comeback, with two recently discovered mechanisms generating robust turbulence which do not rely on magnetic fields at all. An important clue regarding planetesimal formation is an apparent 100 km diameter peak in the pre-depletion, pre-erosion mass distribution of asteroids; scenarios leading directly from independent nebula particulates to large objects of this size, which avoid the problematic m-km size range, could be called "leapfrog" scenarios. The leapfrog scenario we have studied in detail involves formation of dense clumps of aerodynamically selected, typically mm-size particles in turbulence, which can under certain conditions shrink inexorably on 100-1000 orbit timescales and form 10-100 km diameter sandpile planetesimals. There is evidence that at least the ordinary chondrite parent bodies were initially composed entirely of a homogeneous mix of such particles. Thus, while they are arcane, turbulent concentration models acting directly on chondrule-size particles are worthy of deeper study. The typical sizes of planetesimals and the rate of their formation can be

  6. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    Science.gov (United States)

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods, respectively.
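
    The core of a multi-scale LoG blob search can be written in a few lines; the 2-D toy version below only illustrates the scale-space idea (the paper works on 3-D CT data and adds candidate pruning around a seed point, which is omitted here).

```python
# Minimal multi-scale Laplacian of Gaussian (LoG) blob search on a 2-D toy
# image, in the spirit of the location/size estimator described above.
import numpy as np
from scipy.ndimage import gaussian_laplace

# toy image: a bright disk of radius 8 px on a noisy background
yy, xx = np.mgrid[0:64, 0:64]
image = ((xx - 32) ** 2 + (yy - 32) ** 2 < 8 ** 2).astype(float)
image += 0.1 * np.random.default_rng(0).normal(size=image.shape)

best = None
for sigma in np.linspace(2, 12, 21):
    # scale-normalized response; bright blobs give negative LoG, hence the minus
    response = -sigma ** 2 * gaussian_laplace(image, sigma)
    idx = np.unravel_index(np.argmax(response), response.shape)
    if best is None or response[idx] > best[0]:
        best = (response[idx], idx, sigma)

score, (row, col), sigma = best
# for a 2-D disk, the LoG response peaks at sigma ~ radius / sqrt(2)
print(f"blob at ({row}, {col}), sigma = {sigma:.1f}, radius ~ {sigma * np.sqrt(2):.1f} px")
```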

  7. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    Science.gov (United States)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.

  8. Effect of lateral size of graphene nano-sheets on the mechanical properties and machinability of alumina nano-composites

    Czech Academy of Sciences Publication Activity Database

    Porwal, H.; Saggar, Richa; Tatarko, P.; Grasso, S.; Saunders, T.; Dlouhý, Ivo; Reece, M. J.

    2016-01-01

    Roč. 42, č. 6 (2016), s. 7533-7542 ISSN 0272-8842 EU Projects: European Commission(XE) 264526 Institutional support: RVO:68081723 Keywords : Alumina * Graphene nano-sheets * Nano-composites * Mechanical properties * Machinability Subject RIV: JL - Materials Fatigue, Friction Mechanics Impact factor: 2.986, year: 2016

  9. Bending of marble with intrinsic length scales: a gradient theory with surface energy and size effects

    International Nuclear Information System (INIS)

    Vardoulakis, I.; Kourkoulis, S.K.; Exadaktylos, G.

    1998-01-01

    A gradient bending theory is developed based on a strain energy function that includes the classical Bernoulli-Euler term, the shape correction term (microstructural length scale) introduced by Timoshenko, and a term associated with surface energy (micromaterial length scale) accounting for the bending moment gradient effect. It is shown that the last term is capable of interpreting the size effect in three-point bending (3PB), namely the decrease of the failure load with decreasing beam length at the same aspect ratio. This theory is used to describe the mechanical behaviour of Dionysos-Pentelikon marble in 3PB. A series of tests with prismatic marble beams of the same aperture but different lengths was conducted, and it was concluded that the present theory predicts the size effect well. (orig.)

  10. Steady-state numerical modeling of size effects in micron scale wire drawing

    DEFF Research Database (Denmark)

    Juul, Kristian Jørgensen; Nielsen, Kim Lau; Niordson, Christian Frithiof

    2017-01-01

    Wire drawing processes at the micron scale have received increased interest as micro wires are increasingly required in electrical components. It is well-established that size effects due to large strain gradients play an important role at this scale, and the present study aims to quantify these effects for the wire drawing process. Focus is on investigating the impact of size effects on the most favourable tool geometry (in terms of minimizing the drawing force) for various conditions at the wire/tool interface. The numerical analysis is based on a steady-state framework that enables convergence without dealing with the transient regime, but still fully accounts for the history dependence as well as the elastic unloading. Thus, it forms the basis for a comprehensive parameter study. During the deformation process in wire drawing, large plastic strain gradients evolve in the contact region

  11. Effects of the application of different particle sizes of mill scale (residue) in mass red ceramic

    International Nuclear Information System (INIS)

    Arnt, A.B.C.; Rocha, M.R.; Meller, J.G.

    2012-01-01

    This study aims to evaluate the influence of the particle size of mill scale, a residue, when added to a ceramic body. This residue, rich in iron oxide, may be used as a pigment in the ceramics industry. The use of pigments in ceramic products is related to the characteristics of non-toxicity, chemical stability and determination of tone. The tendency of the pigment to solubilize depends on the specific surface area. The residue studied was initially subjected to physical and chemical characterization and then added, in a proportion of 5% and with different particle sizes, to a commercial white-firing ceramic body. The formulations were sintered at a temperature of 950 °C and evaluated for: loss on ignition, firing linear shrinkage, water absorption, flexural strength and difference of tone. Samples with the finest mill scale particles (0.038 μ) showed the highest mechanical strength values, on the order of 18 MPa. (author)

  12. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such, it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scales is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite-size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times of the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
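
    One common way to turn sampled transition times into a rate in this line of work is to treat nucleation as a Poisson process: estimate the mean rescaled escape time and check the exponential-distribution assumption with a Kolmogorov-Smirnov test. The sketch below uses synthetic times as placeholders for the rescaled times a biased simulation would produce.

```python
# Sketch of the Poisson-process reliability check for rates from biased
# simulations: rescaled nucleation times should be exponentially distributed.
# The times below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tau_true = 3600.0                           # seconds; hypothetical mean nucleation time
times = rng.exponential(tau_true, size=50)  # stand-ins for rescaled escape times

tau_hat = times.mean()                      # MLE of the mean of an exponential
rate = 1.0 / tau_hat                        # nucleation rate estimate
# two-sided Kolmogorov-Smirnov test against the fitted exponential law
ks_stat, p_value = stats.kstest(times, "expon", args=(0, tau_hat))
print(f"tau = {tau_hat:.0f} s, rate = {rate:.2e} 1/s, KS p-value = {p_value:.2f}")
```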

  13. Finite-size scaling theory and quantum hamiltonian Field theory: the transverse Ising model

    International Nuclear Information System (INIS)

    Hamer, C.J.; Barber, M.N.

    1979-01-01

    Exact results for the mass gap, specific heat and susceptibility of the one-dimensional transverse Ising model on a finite lattice are generated by constructing a finite matrix representation of the Hamiltonian using strong-coupling eigenstates. The critical behaviour of the limiting infinite chain is analysed using finite-size scaling theory. In this way, excellent estimates (to within 1/2% accuracy) are found for the critical coupling and the exponents α, ν and γ
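
    To make the construction concrete, a small-scale version of the program described above (diagonalize the transverse Ising chain on finite lattices and watch the mass gap close) can be written with dense matrices. This is a brute-force exact diagonalization with open boundaries assumed, not the strong-coupling basis construction used in the paper.

```python
# Exact diagonalization of the 1-D transverse-field Ising chain,
#   H = -sum_i sx_i sx_{i+1} - g * sum_i sz_i   (open boundaries assumed),
# followed by a look at how the mass gap closes with chain length at g = 1.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
id2 = np.eye(2)

def op_at(op, site, n):
    """Embed a single-site operator at position `site` in an n-site chain."""
    out = op if site == 0 else id2
    for i in range(1, n):
        out = np.kron(out, op if i == site else id2)
    return out

def gap(n, g):
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):                       # nearest-neighbour sx sx coupling
        H -= op_at(sx, i, n) @ op_at(sx, i + 1, n)
    for i in range(n):                           # transverse field
        H -= g * op_at(sz, i, n)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]                           # mass gap

for n in (4, 6, 8, 10):
    print(f"N = {n:2d}:  gap = {gap(n, 1.0):.4f},  N*gap = {n * gap(n, 1.0):.3f}")
# At the critical coupling the gap scales as 1/N, so N*gap should approach a
# constant as the chain grows, up to finite-size corrections.
```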

  14. Scaling of heavy ion beam probes for reactor-size devices

    International Nuclear Information System (INIS)

    Hickok, R.L.; Jennings, W.C.; Connor, K.A.; Schoch, P.M.

    1984-01-01

    Heavy ion beam probes for reactor-size plasma devices will require beam energies of approximately 10 MeV. Although accelerator technology appears to be available, beam deflection systems and parallel plate energy analyzers present severe difficulties if existing technology is scaled in a straightforward manner. We propose a different operating mode which will use a fixed beam trajectory and multiple cylindrical energy analyzers. Development effort will still be necessary, but we believe the basic technology is available

  15. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields sharply declined beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
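
    An empirical semi-variogram of the kind used above is straightforward to compute; the 1-D transect sketch below illustrates the estimator (the study worked with 2-D remotely sensed and GIS-generated grids, and its field data are not reproduced here).

```python
# Minimal empirical semi-variogram on a synthetic 1-D transect; the lag at
# which gamma(h) levels off (the range) estimates the characteristic length
# scale of the field.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(0, 300, 5.0)                              # sample positions, metres
z = np.sin(x / 15.0) + 0.3 * rng.normal(size=x.size)    # toy surface-temperature field

def semivariogram(x, z, lags, tol):
    gamma = []
    d = np.abs(x[:, None] - x[None, :])                 # pairwise separations
    sq = (z[:, None] - z[None, :]) ** 2                 # pairwise squared differences
    for h in lags:
        mask = np.abs(d - h) < tol                      # pairs in the lag bin h +/- tol
        gamma.append(sq[mask].mean() / 2.0)             # gamma(h) = mean sq. diff / 2
    return np.array(gamma)

lags = np.arange(5.0, 100.0, 5.0)
print(np.round(semivariogram(x, z, lags, tol=2.5), 3))
```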

  16. Characteristic length scale of input data in distributed models: implications for modeling grain size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields sharply declined beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  17. Synchronization in scale-free networks: The role of finite-size effects

    Science.gov (United States)

    Torres, D.; Di Muro, M. A.; La Rocca, C. E.; Braunstein, L. A.

    2015-06-01

    Synchronization problems in complex networks are very often studied by researchers due to their many applications to various fields such as neurobiology, e-commerce and completion of tasks. In particular, scale-free networks with degree distribution P(k) ∼ k^(−λ) are widely used in research since they are ubiquitous in Nature and other real systems. In this paper we focus on the surface relaxation growth model in scale-free networks with 2.5 < λ < 3, and study the scaling behavior of the fluctuations, in the steady state, with the system size N. We find a novel behavior of the fluctuations characterized by a crossover between two regimes at a value of N = N* that depends on λ: a logarithmic regime, found in previous research, and a constant regime. We propose a function that describes this crossover, which is in very good agreement with the simulations. We also find that, for a system size above N*, the fluctuations decrease with λ, which means that the synchronization of the system improves as λ increases. We explain this crossover by analyzing the role of the network's heterogeneity produced by the system size N and the exponent of the degree distribution.

  18. Observations of the auroral width spectrum at kilometre-scale size

    Directory of Open Access Journals (Sweden)

    N. Partamies

    2010-03-01

    This study examines auroral colour camera data from the Canadian Dense Array Imaging SYstem (DAISY). The Dense Array consists of three imagers with different narrow field-of-view optics (compared to the all-sky view). The main scientific motivation arises from an earlier study by Knudsen et al. (2001), who used All-Sky Imager (ASI) observations combined with even earlier TV camera observations (Maggs and Davis, 1968) to suggest that there is a gap in the distribution of auroral arc widths at around 1 km. With DAISY observations we are able to show that the gap is an instrument artifact due to the limited spatial resolution and coverage of commonly used instrumentation, namely ASIs and TV cameras. If the auroral scale-size spectrum is indeed continuous, the mechanisms forming these structures should be able to produce all of the different scale sizes. So far, such a single process has not been proposed in the literature, and very few models are designed to interact with each other even though the ranges of their favourable conditions do overlap. All scale sizes should be considered in future studies of auroral forms and electron acceleration regions, in both observational and theoretical approaches.

  19. INCREASING RETURNS TO SCALE, DYNAMICS OF INDUSTRIAL STRUCTURE AND SIZE DISTRIBUTION OF FIRMS

    Institute of Scientific and Technical Information of China (English)

    Ying FAN; Menghui LI; Zengru DI

    2006-01-01

    A multi-agent model is presented to discuss the market dynamics and the size distribution of firms. The model emphasizes the effects of increasing returns to scale and describes the birth and death of adaptive producers. The evolution of market structure and its behavior under technological shocks are investigated. Its dynamical results are in good agreement with some empirical "stylized facts" of industrial evolution. With the diversity of demand and adaptive growth strategies of firms, firm size in the generalized model obeys a power-law distribution. Three factors mainly determine the competitive dynamics and the skewed size distributions of firms: 1. the self-reinforcing mechanism; 2. adaptive firm growth strategies; 3. demand diversity or widespread heterogeneity in the technological capabilities of firms.

  20. Genome-scale identification of Legionella pneumophila effectors using a machine learning approach.

    Directory of Open Access Journals (Sweden)

    David Burstein

    2009-07-01

    Full Text Available A large number of highly pathogenic bacteria utilize secretion systems to translocate effector proteins into host cells. Using these effectors, the bacteria subvert host cell processes during infection. Legionella pneumophila translocates effectors via the Icm/Dot type-IV secretion system and, to date, approximately 100 effectors have been identified by various experimental and computational techniques. Effector identification is a critical first step towards the understanding of the pathogenesis system in L. pneumophila as well as in other bacterial pathogens. Here, we formulate the task of effector identification as a classification problem: each L. pneumophila open reading frame (ORF) was classified as either effector or not. We computationally defined a set of features that best distinguish effectors from non-effectors. These features cover a wide range of characteristics including taxonomical dispersion, regulatory data, genomic organization, similarity to eukaryotic proteomes and more. Machine learning algorithms utilizing these features were then applied to classify all the ORFs within the L. pneumophila genome. Using this approach we were able to predict and experimentally validate 40 new effectors, reaching a success rate above 90%. Increasing the number of validated effectors to around 140, we were able to gain novel insights into their characteristics. Effectors were found to have low G+C content, supporting the hypothesis that a large number of effectors originate via horizontal gene transfer, probably from their protozoan host. In addition, effectors were found to cluster in specific genomic regions. Finally, we were able to provide a novel description of the C-terminal translocation signal required for effector translocation by the Icm/Dot secretion system. To conclude, we have discovered 40 novel L. pneumophila effectors, predicted over a hundred additional highly probable effectors, and shown the applicability of machine learning to genome-scale effector identification.
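
    The classification setup described above can be sketched with off-the-shelf tools: build a feature matrix over ORFs, train a classifier on the known effectors, and rank the remaining ORFs by predicted probability. The sketch below uses scikit-learn with random placeholder features and labels; the paper's actual feature definitions and learning algorithms are not reproduced here.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_orfs = 3000
    # Placeholder columns standing in for, e.g., G+C content, taxonomical
    # dispersion, regulatory data, genomic organization, similarity to
    # eukaryotic proteomes (random values for illustration only).
    X = rng.random((n_orfs, 5))
    y = rng.integers(0, 2, n_orfs)   # 1 = known effector, 0 = non-effector

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print("CV AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

    # rank all ORFs by predicted effector probability; top candidates would
    # then go to experimental validation, as in the study above
    clf.fit(X, y)
    candidates = np.argsort(clf.predict_proba(X)[:, 1])[::-1][:40]
    print(candidates[:10])
    ```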

  1. From damselflies to pterosaurs: how burst and sustainable flight performance scale with size.

    Science.gov (United States)

    Marden, J H

    1994-04-01

    Recent empirical data for short-burst lift and power production of flying animals indicate that mass-specific lift and power output scale independently (lift) or slightly positively (power) with increasing size. These results contradict previous theory, as well as simple observation, which argues for degradation of flight performance with increasing size. Here, empirical measures of lift and power during short-burst exertion are combined with empirically based estimates of maximum muscle power output in order to predict how burst and sustainable performance scale with body size. The resulting model is used to estimate performance of the largest extant flying birds and insects, along with the largest flying animals known from fossils. These estimates indicate that burst flight performance capacities of even the largest extinct fliers (estimated mass 250 kg) would allow takeoff from the ground; however, limitations on sustainable power output should constrain capacity for continuous flight at body sizes exceeding 0.003-1.0 kg, depending on relative wing length and flight muscle mass.

  2. Fractal and multifractal approaches for the analysis of crack-size dependent scaling laws in fatigue

    Energy Technology Data Exchange (ETDEWEB)

    Paggi, Marco [Politecnico di Torino, Department of Structural Engineering and Geotechnics, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)], E-mail: marco.paggi@polito.it; Carpinteri, Alberto [Politecnico di Torino, Department of Structural Engineering and Geotechnics, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)

    2009-05-15

    The enhanced ability to detect and measure very short cracks, along with a great interest in applying fracture mechanics formulae to smaller and smaller crack sizes, has pointed out the so-called anomalous behavior of short cracks with respect to their longer counterparts. The crack-size dependencies of both the fatigue threshold and the Paris constant C are only two notable examples of these anomalous scaling laws. In this framework, a unified theoretical model seems to be missing and the behavior of short cracks can still be considered an open problem. In this paper, we propose a critical reexamination of the fractal models for the analysis of crack-size effects in fatigue. The limitations of each model are pointed out and removed. At the end, a new generalized theory based on fractal geometry is proposed, which permits a consistent interpretation of the short crack-related anomalous scaling laws within a unified theoretical formulation. Finally, this approach is used to interpret relevant experimental data related to the crack-size dependence of the fatigue threshold in metals.

  3. Fractal and multifractal approaches for the analysis of crack-size dependent scaling laws in fatigue

    International Nuclear Information System (INIS)

    Paggi, Marco; Carpinteri, Alberto

    2009-01-01

    The enhanced ability to detect and measure very short cracks, along with a great interest in applying fracture mechanics formulae to smaller and smaller crack sizes, has pointed out the so-called anomalous behavior of short cracks with respect to their longer counterparts. The crack-size dependencies of both the fatigue threshold and the Paris constant C are only two notable examples of these anomalous scaling laws. In this framework, a unified theoretical model seems to be missing and the behavior of short cracks can still be considered an open problem. In this paper, we propose a critical reexamination of the fractal models for the analysis of crack-size effects in fatigue. The limitations of each model are pointed out and removed. At the end, a new generalized theory based on fractal geometry is proposed, which permits a consistent interpretation of the short crack-related anomalous scaling laws within a unified theoretical formulation. Finally, this approach is used to interpret relevant experimental data related to the crack-size dependence of the fatigue threshold in metals.
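
    For readers unfamiliar with the Paris constant C mentioned in both records, the sketch below numerically integrates the classical Paris law da/dN = C(ΔK)^m for a long crack. The material constants are round illustrative values, not data from the paper, and the short-crack anomalies discussed above are precisely the regime where this classical form breaks down.

    ```python
    import numpy as np

    C, m = 1e-11, 3.0        # Paris constants (illustrative round values)
    Y, dsigma = 1.0, 100.0   # geometry factor, stress range [MPa]

    def cycles_to_grow(a0, ac, n=10_000):
        """Integrate dN = da / (C * dK^m), with dK = Y*dsigma*sqrt(pi*a)."""
        a = np.linspace(a0, ac, n)
        dK = Y * dsigma * np.sqrt(np.pi * a)
        f = 1.0 / (C * dK**m)
        return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a)))  # trapezoid

    # growth accelerates strongly with crack size
    print(f"{cycles_to_grow(1e-4, 1e-3):.3g} cycles (0.1 mm -> 1 mm)")
    print(f"{cycles_to_grow(1e-3, 1e-2):.3g} cycles (1 mm -> 10 mm)")
    ```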

  4. Machine technology: a survey

    International Nuclear Information System (INIS)

    Barbier, M.M.

    1981-01-01

    An attempt was made to find existing machines that have been upgraded and that could be used for large-scale decontamination operations outdoors. Such machines are in the building industry, the mining industry, and the road construction industry. The road construction industry has yielded the machines in this presentation. A review is given of operations that can be done with the machines available

  5. Scale effects between body size and limb design in quadrupedal mammals.

    Science.gov (United States)

    Kilbourne, Brandon M; Hoffman, Louwrens C

    2013-01-01

    Recently the metabolic cost of swinging the limbs has been found to be much greater than previously thought, raising the possibility that limb rotational inertia influences the energetics of locomotion. Larger mammals have a lower mass-specific cost of transport than smaller mammals. The scaling of the mass-specific cost of transport is partly explained by decreasing stride frequency with increasing body size; however, it is unknown if limb rotational inertia also influences the mass-specific cost of transport. Limb length and inertial properties--limb mass, center of mass (COM) position, moment of inertia, radius of gyration, and natural frequency--were measured in 44 species of terrestrial mammals, spanning eight taxonomic orders. Limb length increases disproportionately with body mass via positive allometry (length ∝ body mass^0.40); the positive allometry of limb length may help explain the scaling of the metabolic cost of transport. When scaled against body mass, forelimb inertial properties, apart from mass, scale with positive allometry. Fore- and hindlimb mass scale according to geometric similarity (limb mass ∝ body mass^1.0), as do the remaining hindlimb inertial properties. The positive allometry of limb length is largely the result of absolute differences in limb inertial properties between mammalian subgroups. Though likely detrimental to locomotor costs in large mammals, scale effects in limb inertial properties appear to be concomitant with scale effects in sensorimotor control and locomotor ability in terrestrial mammals. Across mammals, the forelimb's potential for angular acceleration scales according to geometric similarity, whereas the hindlimb's potential for angular acceleration scales with positive allometry.
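
    Scaling exponents like the 0.40 above are conventionally obtained by ordinary least squares on log-transformed data. A minimal sketch with synthetic data follows; the fitted slope is the allometric exponent, and 1/3 would be the isometric expectation for a length regressed against a mass.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    mass = 10 ** rng.uniform(0, 3, 44)                        # body mass [kg]
    length = 0.3 * mass**0.40 * rng.lognormal(0.0, 0.05, 44)  # limb length [m]

    slope, intercept = np.polyfit(np.log10(mass), np.log10(length), 1)
    print(f"allometric exponent ~ {slope:.2f}")   # ~0.40; isometry would be 0.33
    ```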

  6. Size-dependent elastic/inelastic behavior of enamel over millimeter and nanometer length scales.

    Science.gov (United States)

    Ang, Siang Fung; Bortel, Emely L; Swain, Michael V; Klocke, Arndt; Schneider, Gerold A

    2010-03-01

    The microstructure of enamel, like that of most biological tissues, is hierarchical, and this hierarchy determines its mechanical behavior. However, current studies of the mechanical behavior of enamel lack a systematic investigation of these hierarchical length scales. In this study, we performed macroscopic uni-axial compression tests and spherical indentation with different indenter radii to probe enamel's elastic/inelastic transition over four hierarchical length scales, namely: 'bulk enamel' (mm), 'multiple-rod' (tens of µm), 'intra-rod' (hundreds of nm, with multiple crystallites) and finally 'single-crystallite' (tens of nm, with an area of approximately one hydroxyapatite crystallite). The enamel's elastic/inelastic transitions were observed at 0.4-17 GPa depending on the length scale and were compared with the values of synthetic hydroxyapatite crystallites. The elastic limit of a material is important as it provides insights into the deformability of the material before fracture. At the smallest investigated length scale (contact radius approximately 20 nm), the elastic limit is followed by plastic deformation. At the largest investigated length scale (contact size approximately 2 mm), only an elastic and then micro-crack-induced response was observed. A map of the elastic/inelastic regions of enamel from the millimeter to the nanometer length scale is presented. Possible underlying mechanisms are also discussed. (c) 2009 Elsevier Ltd. All rights reserved.

  7. Dependence of exponents on text length versus finite-size scaling for word-frequency distributions

    Science.gov (United States)

    Corral, Álvaro; Font-Clos, Francesc

    2017-08-01

    Some authors have recently argued that a finite-size scaling law for the text-length dependence of word-frequency distributions cannot be conceptually valid. Here we give solid quantitative evidence for the validity of this scaling law, using both careful statistical tests and analytical arguments based on the generalized central-limit theorem applied to the moments of the distribution (and obtaining a novel derivation of Heaps' law as a by-product). We also find that the picture of word-frequency distributions with power-law exponents that decrease with text length [X. Yan and P. Minnhagen, Physica A 444, 828 (2016), 10.1016/j.physa.2015.10.082] does not stand up to rigorous statistical analysis. Instead, we show that the distributions are perfectly described by power-law tails with stable exponents, whose values are close to 2, in agreement with the classical Zipf's law. Some misconceptions about scaling are also clarified.
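
    A standard way to estimate such a power-law tail exponent is the continuous maximum-likelihood (Hill-type) estimator applied above a cutoff x_min, in the spirit of the rigorous fits advocated above. A minimal sketch with synthetic Zipf-like data:

    ```python
    import numpy as np

    def tail_exponent(samples, xmin):
        """Continuous power-law MLE: alpha = 1 + n / sum(ln(x/xmin))."""
        x = np.asarray(samples, dtype=float)
        x = x[x >= xmin]
        return 1.0 + x.size / np.sum(np.log(x / xmin))

    rng = np.random.default_rng(0)
    alpha_true = 2.0
    # inverse-transform sampling of P(x) ~ x^-alpha for x >= 1
    data = (1.0 - rng.random(100_000)) ** (-1.0 / (alpha_true - 1.0))
    print(f"estimated exponent: {tail_exponent(data, 1.0):.3f}")   # ~2.0
    ```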

  8. Economies of scale and optimal size of hospitals: Empirical results for Danish public hospitals

    DEFF Research Database (Denmark)

    Kristensen, Troels

    The aim is to assess whether the current configuration of Danish hospitals is subject to scale economies that may justify such plans, and to estimate an optimal hospital size. Methods: We estimate cost functions using panel data on total costs, DRG-weighted casemix, and number of beds for three years from 2004-2006. A short-run cost function is used to derive estimates of long-run scale economies by applying the envelope condition. Results: We identify moderate to significant long-run economies of scale when applying two alternative models. The optimal number of beds per hospital is estimated to be 275 beds per site. Sensitivity analysis to partial changes in model parameters yields a joint 95% confidence interval in the range 130-585 beds per site. Conclusions: The results indicate that it may be appropriate to consolidate the production of small hospitals.

  9. Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states

    Science.gov (United States)

    de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.

    2015-12-01

    Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.

  10. In-situ particle sizing at millimeter scale from electrochemical noise: simulation and experiments

    International Nuclear Information System (INIS)

    Yakdi, N.; Huet, F.; Ngo, K.

    2015-01-01

    Over the last few years, particle sizing techniques in multiphase flows based on optical technologies have emerged as standard tools, but the main disadvantage of these techniques is their dependence on the visibility of the measurement volume and on the focal distance. It is therefore important to promote alternative techniques for particle sizing that are, moreover, able to work in hostile environments. This paper presents a single-particle sizing technique at a millimeter scale based on the measurement of the variation of the electrolyte resistance (ER) due to the passage of an insulating sphere between two electrodes immersed in a conductive solution. A theoretical model is proposed to determine the influence of the electrode size, the interelectrode distance, and the size and position of the sphere on the electrolyte resistance. Experimental variations of ER due to the passage of spheres, measured using a home-made electronic device, are also presented in this paper. The excellent agreement obtained between the theoretical and experimental results allows validation of both the model and the experimental measurements. In addition, the technique was shown to be able to perform accurate measurements of the velocity of a ball falling in a liquid.
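
    For orientation, the classical small-sphere resistive-pulse (Coulter-type) approximation below shows why the signal scales so strongly with particle size. It assumes a sphere of diameter d inside a conducting channel of diameter D with d ≪ D, and is a generic benchmark rather than the two-electrode model derived in the paper; all numbers are illustrative.

    ```python
    import math

    def delta_R(rho, d, D):
        """Resistance step [ohm] for a sphere of diameter d in a channel of
        diameter D, electrolyte resistivity rho [ohm*m]; valid for d << D."""
        return 4.0 * rho * d**3 / (math.pi * D**4)

    rho = 1.0          # ohm*m (conductivity 1 S/m, illustrative)
    D = 10e-3          # 10 mm channel
    for d in (1e-3, 2e-3, 4e-3):          # millimetre-scale spheres
        print(f"d = {d*1e3:.0f} mm -> delta R = {delta_R(rho, d, D):.3f} ohm")
    # the ~d^3 dependence is what makes single-sphere sizing feasible
    ```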

  11. A macroevolutionary explanation for energy equivalence in the scaling of body size and population density.

    Science.gov (United States)

    Damuth, John

    2007-05-01

    Across a wide array of animal species, mean population densities decline with species body mass such that the rate of energy use of local populations is approximately independent of body size. This "energetic equivalence" is particularly evident when ecological population densities are plotted across several or more orders of magnitude in body mass and is supported by a considerable body of evidence. Nevertheless, interpretation of the data has remained controversial, largely because of the difficulty of explaining the origin and maintenance of such a size-abundance relationship in terms of purely ecological processes. Here I describe results of a simulation model suggesting that an extremely simple mechanism operating over evolutionary time can explain the major features of the empirical data. The model specifies only the size scaling of metabolism and a process where randomly chosen species evolve to take resource energy from other species. This process of energy exchange among particular species is distinct from a random walk of species abundances and creates a situation in which species populations using relatively low amounts of energy at any body size have an elevated extinction risk. Selective extinction of such species rapidly drives size-abundance allometry in faunas toward approximate energetic equivalence and maintains it there.
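
    In symbols, the energetic-equivalence pattern referenced above follows from combining the two empirical scalings, assuming Kleiber 3/4-power metabolic scaling (a standard assumption, not a result of the simulation model):

    ```latex
    B \propto M^{3/4}, \qquad N \propto M^{-3/4}
    \quad\Longrightarrow\quad
    E_{\mathrm{pop}} = N\,B \;\propto\; M^{-3/4} M^{3/4} = M^{0}
    ```

    so the rate of energy use of a local population is approximately independent of body size M.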

  12. Trends in size of tropical deforestation events signal increasing dominance of industrial-scale drivers

    Science.gov (United States)

    Austin, Kemen G.; González-Roglich, Mariano; Schaffer-Smith, Danica; Schwantes, Amanda M.; Swenson, Jennifer J.

    2017-05-01

    Deforestation continues across the tropics at alarming rates, with repercussions for ecosystem processes, carbon storage and long term sustainability. Taking advantage of recent fine-scale measurement of deforestation, this analysis aims to improve our understanding of the scale of deforestation drivers in the tropics. We examined trends in forest clearings of different sizes from 2000-2012 by country, region and development level. As tropical deforestation increased from approximately 6900 kha yr⁻¹ in the first half of the study period to >7900 kha yr⁻¹ in the second half, >50% of this increase was attributable to the proliferation of medium and large clearings (>10 ha). This trend was most pronounced in Southeast Asia and in South America. Outside of Brazil, >60% of the observed increase in deforestation in South America was due to an upsurge in medium- and large-scale clearings; Brazil had a divergent trend of decreasing deforestation, >90% of which was attributable to a reduction in medium and large clearings. The emerging prominence of large-scale drivers of forest loss in many regions and countries suggests the growing need for policy interventions which target industrial-scale agricultural commodity producers. The experience in Brazil suggests that there are promising policy solutions to mitigate large-scale deforestation, but that these policy initiatives do not adequately address small-scale drivers. By providing up-to-date and spatially explicit information on the scale of deforestation, and the trends in these patterns over time, this study contributes valuable information for monitoring, and designing effective interventions to address deforestation.

  13. Scale size and life time of energy conversion regions observed by Cluster in the plasma sheet

    Directory of Open Access Journals (Sweden)

    M. Hamrin

    2009-11-01

    Full Text Available In this article, and in a companion paper by Hamrin et al. (2009) [Occurrence and location of concentrated load and generator regions observed by Cluster in the plasma sheet], we investigate localized energy conversion regions (ECRs) in Earth's plasma sheet. From more than 80 Cluster plasma sheet crossings (660 h of data) at an altitude of about 15–20 R_E in the summer and fall of 2001, we have identified 116 Concentrated Load Regions (CLRs) and 35 Concentrated Generator Regions (CGRs). By examining variations in the power density, E·J, where E is the electric field and J is the current density obtained by Cluster, we have estimated typical values of the scale size and life time of the CLRs and the CGRs. We find that a majority of the observed ECRs are rather stationary in space, but varying in time. Assuming that the ECRs are cylindrically shaped and equal in size, we conclude that the typical scale size of the ECRs is 2 R_E ≲ ΔS_ECR ≲ 5 R_E. The ECRs hence occupy a significant portion of the mid-altitude plasma sheet. Moreover, the CLRs appear to be somewhat larger than the CGRs. The life times of the ECRs are of the order of 1–10 min, consistent with the large-scale magnetotail MHD simulations of Birn and Hesse (2005). The life time of the CGRs is somewhat shorter than for the CLRs. On time scales of 1–10 min, we believe that ECRs rise and vanish in significant regions of the plasma sheet, possibly oscillating between load and generator character. It is probable that at least some of the observed ECRs shuttle energy back and forth in the plasma sheet instead of channeling it to the ionosphere.

  14. Machine learning for large-scale wearable sensor data in Parkinson's disease: Concepts, promises, pitfalls, and futures.

    Science.gov (United States)

    Kubota, Ken J; Chen, Jason A; Little, Max A

    2016-09-01

    For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, "wearable," sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that "learn" from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses implications of this technology and a practical road map for realizing the full potential of this technology in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.

  15. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    The gear vibration signal is nonlinear and non-stationary, so gear fault diagnosis under variable conditions has always been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on intrinsic time-scale decomposition (ITD), singular value decomposition (SVD) and a support vector machine (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of the gearbox into several proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, the support vector machine is applied to classify the fault type of the gear. According to the experimental results, the performance of ITD-SVD exceeds that of the time-frequency analysis methods with EMD and WPT combined with SVD for feature extraction, and the SVM classifier outperforms K-nearest neighbors (K-NN) and back-propagation (BP) classifiers. Moreover, the proposed approach can accurately diagnose and identify the different fault types of gears under variable conditions.
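
    The pipeline can be sketched as follows. ITD itself is not available in common Python libraries, so a placeholder band-splitting routine stands in for the PRC extraction; the singular values of the stacked components serve as the feature vector for an SVM, mirroring the structure (though not the detail) of the method above. All data here are synthetic.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def itd_placeholder(signal, n_components=6):
        """Stand-in for intrinsic time-scale decomposition: split the signal
        into octave frequency bands via FFT masking (illustrative only)."""
        spec = np.fft.rfft(signal)
        prcs, hi = [], len(spec)
        for _ in range(n_components):
            lo = hi // 2
            band = np.zeros_like(spec)
            band[lo:hi] = spec[lo:hi]
            prcs.append(np.fft.irfft(band, n=len(signal)))
            hi = lo
        return np.array(prcs)              # shape: (n_components, n_samples)

    def feature_vector(signal):
        """Singular values of the stacked components as robust features."""
        return np.linalg.svd(itd_placeholder(signal), compute_uv=False)

    rng = np.random.default_rng(0)
    signals = rng.standard_normal((80, 1024))   # illustrative vibration records
    y = rng.integers(0, 3, 80)                  # gear fault class labels
    X = np.array([feature_vector(s) for s in signals])
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X[:5]))
    ```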

  16. The effect of intermediate stop and ball size in fabrication of recycled steel powder using ball milling from machining steel chips

    International Nuclear Information System (INIS)

    Fitri, M.W.M.; Shun, C.H.; Rizam, S.S.; Shamsul, J.B.

    2007-01-01

    A feasibility study of producing recycled steel powder from steel scrap by ball milling was carried out. Steel scrap from machining was used as the raw material and was milled in a planetary ball mill. Three samples were prepared in order to study the effects of intermediate stops and ball size. The sample milled with intermediate stops showed a finer particle size than the sample milled continuously. The decrease in vial temperature during the intermediate stops makes the steel powder less ductile, so it is more easily work-hardened and fragmented into fine powder. A mixture of small and large balls gives the best yield of recycled steel powder, since it delivers a higher impact force to the scrap and accelerates its fragmentation into powder. (author)

  17. Annotated bibliography on the impacts of size and scale of silvopasture in the Southeastern U.S.A

    Science.gov (United States)

    Gregory E. Frey; Marcus M. Comer

    2018-01-01

    Silvopasture, the integration of trees and pasture for livestock, has numerous potential benefits for producers. However, size or scale of the operation may affect those benefits. A review of relevant research on the scale and size economies of silvopasture, general forestry, and livestock agriculture was undertaken to better understand potential silvopasture...

  18. Size-selective sorting in bubble streaming flows: Particle migration on fast time scales

    Science.gov (United States)

    Thameem, Raqeeb; Rallabandi, Bhargav; Hilgenfeldt, Sascha

    2015-11-01

    Steady streaming from ultrasonically driven microbubbles is an increasingly popular technique in microfluidics because such devices are easily manufactured and generate powerful and highly controllable flows. Combining streaming and Poiseuille transport flows allows for passive size-sensitive sorting at particle sizes and selectivities much smaller than the bubble radius. The crucial particle deflection and separation takes place over very small times (milliseconds) and length scales (20-30 microns) and can be rationalized using a simplified geometric mechanism. A quantitative theoretical description is achieved through the application of recent results on three-dimensional streaming flow field contributions. To develop a more fundamental understanding of the particle dynamics, we use high-speed photography of trajectories in polydisperse particle suspensions, recording the particle motion on the time scale of the bubble oscillation. Our data reveal the dependence of particle displacement on driving phase, particle size, oscillatory flow speed, and streaming speed. With this information, the effective repulsive force exerted by the bubble on the particle can be quantified, showing for the first time how fast, selective particle migration is effected in a streaming flow. We acknowledge support by the National Science Foundation under grant number CBET-1236141.

  19. Parameter Scaling for Epidemic Size in a Spatial Epidemic Model with Mobile Individuals.

    Directory of Open Access Journals (Sweden)

    Chiyori T Urabe

    Full Text Available In recent years, serious infectious diseases tend to transcend national borders and spread widely on a global scale. The incidence and prevalence of epidemics are highly influenced not only by pathogen-dependent disease characteristics such as the force of infection, the latent period, and the infectious period, but also by human mobility and contact patterns. However, the effect of heterogeneous mobility of individuals on epidemic outcomes is not fully understood. Here, we aim to elucidate how spatial mobility of individuals contributes to the final epidemic size in a spatial susceptible-exposed-infectious-recovered (SEIR) model with mobile individuals on a square lattice. After illustrating the interplay between the mobility parameters and the other parameters in spatial epidemic spreading, we propose an index, a function of the system parameters, which largely governs the final epidemic size. The main contribution of this study is to show that the proposed index is useful for estimating how parameter scaling affects the final epidemic size. To demonstrate the effectiveness of the proposed index, we show that there is a positive correlation between the proposed index, computed with real data on human airline travel, and the actual number of positive incident cases of influenza B across the world, implying that the growing incidence of influenza B is attributable to increased human mobility.
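
    As a well-mixed benchmark for the quantity studied above: the final epidemic size z of a closed SEIR outbreak depends on the parameters only through the basic reproduction number R0 and solves z = 1 - exp(-R0 z). The paper's index captures how spatial mobility shifts this outcome; the sketch below computes only the classical benchmark by fixed-point iteration.

    ```python
    import math

    def final_size(R0, tol=1e-12, max_iter=10_000):
        """Solve z = 1 - exp(-R0*z) by fixed-point iteration."""
        z = 0.5
        for _ in range(max_iter):
            z_new = 1.0 - math.exp(-R0 * z)
            if abs(z_new - z) < tol:
                break
            z = z_new
        return z

    for R0 in (0.9, 1.5, 2.5, 4.0):
        print(f"R0 = {R0:.1f} -> final size = {final_size(R0):.3f}")
    # below R0 = 1 the outbreak dies out (z -> 0); above it, z grows rapidly
    ```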

  20. Finite-size scaling of clique percolation on two-dimensional Moore lattices

    Science.gov (United States)

    Dong, Jia-Qi; Shen, Zhou; Zhang, Yongwen; Huang, Zi-Gang; Huang, Liang; Chen, Xiaosong

    2018-05-01

    Clique percolation has attracted much attention due to its significance in understanding topological overlap among communities and dynamical instability of structured systems. Rich critical behavior has been observed in clique percolation on Erdős-Rényi (ER) random graphs, but few works have discussed clique percolation on finite dimensional systems. In this paper, we have defined a series of characteristic events, i.e., the historically largest size jumps of the clusters, in the percolating process of adding bonds and developed a new finite-size scaling scheme based on the interval of the characteristic events. Through the finite-size scaling analysis, we have found, interestingly, that, in contrast to the clique percolation on an ER graph where the critical exponents are parameter dependent, the two-dimensional (2D) clique percolation simply shares the same critical exponents with traditional site or bond percolation, independent of the clique percolation parameters. This has been corroborated by bridging two special types of clique percolation to site percolation on 2D lattices. Mechanisms for the difference of the critical behaviors between clique percolation on ER graphs and on 2D lattices are also discussed.

  1. Advancing the large-scale CCS database for metabolomics and lipidomics at the machine-learning era.

    Science.gov (United States)

    Zhou, Zhiwei; Tu, Jia; Zhu, Zheng-Jiang

    2018-02-01

    Metabolomics and lipidomics aim to comprehensively measure the dynamic changes of all metabolites and lipids that are present in biological systems. The use of ion mobility-mass spectrometry (IM-MS) for metabolomics and lipidomics has facilitated the separation and identification of metabolites and lipids in complex biological samples. The collision cross-section (CCS) value derived from IM-MS is a valuable physicochemical property for the unambiguous identification of metabolites and lipids. However, CCS values obtained from experimental measurement and computational modeling are available only in limited numbers, which significantly restricts the application of IM-MS. In this review, we discuss the recently developed machine-learning based prediction approach, which can efficiently generate precise CCS databases on a large scale. We also highlight the applications of CCS databases to support metabolomics and lipidomics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Scaling HEP to Web size with RESTful protocols: The frontier example

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2011-01-01

    The World-Wide-Web has scaled to an enormous size. The largest single contributor to its scalability is the HTTP protocol, particularly when used in conformity to REST (REpresentational State Transfer) principles. High Energy Physics (HEP) computing also has to scale to an enormous size, so it makes sense to base much of it on RESTful protocols. Frontier, which reads databases with an HTTP-based RESTful protocol, has successfully scaled to deliver production detector conditions data from both the CMS and ATLAS LHC detectors to hundreds of thousands of computer cores worldwide. Frontier is also able to re-use a large amount of standard software that runs the Web: on the clients, caches, and servers. I discuss the specific ways in which HTTP and REST enable high scalability for Frontier. I also briefly discuss another protocol used in HEP computing that is HTTP-based and RESTful, and another protocol that could benefit from it. My goal is to encourage HEP protocol designers to consider HTTP and REST whenever the same information is needed in many places.
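
    The caching pattern that underpins this scalability can be illustrated with a conditional GET: any standard HTTP cache between client and server can answer it without recomputing the response. The endpoint and query parameters below are hypothetical, not a real Frontier URL.

    ```python
    import requests

    url = "http://conditions.example.org/frontier/query"   # hypothetical
    params = {"table": "detector_conditions", "run": "12345"}

    r1 = requests.get(url, params=params, timeout=10)
    etag = r1.headers.get("ETag")

    # later: revalidate with the server (or any intermediate HTTP cache);
    # a 304 answer means the cached copy is still valid and can be reused
    r2 = requests.get(url, params=params, timeout=10,
                      headers={"If-None-Match": etag} if etag else {})
    print(r2.status_code)   # 304 Not Modified -> serve from cache
    ```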

  3. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has received momentum from both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model in facilitating data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  4. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has received momentum from both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model in facilitating data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.
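
    The training pattern both records describe can be sketched in miniature: map tasks compute gradients on shards of the training data and the reduce step averages them. A single logistic unit stands in for the full neural network; shard counts, learning rate, and data are illustrative.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def shard_gradient(args):
        """Map task: logistic-loss gradient on one shard of the data."""
        w, X, y = args
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        return X.T @ (p - y) / len(y)

    def train(X, y, n_shards=4, lr=0.5, epochs=50):
        w = np.zeros(X.shape[1])
        Xs, ys = np.array_split(X, n_shards), np.array_split(y, n_shards)
        with Pool(n_shards) as pool:
            for _ in range(epochs):
                grads = pool.map(shard_gradient,
                                 [(w, Xi, yi) for Xi, yi in zip(Xs, ys)])
                w -= lr * np.mean(grads, axis=0)   # reduce: average gradients
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.standard_normal((4000, 5))
        y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)
        print(train(X, y))
    ```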

  5. Tracking Single Units in Chronic, Large Scale, Neural Recordings for Brain Machine Interface Applications

    Directory of Open Access Journals (Sweden)

    Ahmed eEleryan

    2014-07-01

    Full Text Available In the study of population coding in neurobiological systems, tracking unit identity may be critical to assess possible changes in the coding properties of neuronal constituents over prolonged periods of time. Ensuring unit stability is even more critical for reliable neural decoding of motor variables in intra-cortically controlled brain-machine interfaces (BMIs. Variability in intrinsic spike patterns, tuning characteristics, and single-unit identity over chronic use is a major challenge to maintaining this stability, requiring frequent daily calibration of neural decoders in BMI sessions by an experienced human operator. Here, we report on a unit-stability tracking algorithm that efficiently and autonomously identifies putative single-units that are stable across many sessions using a relatively short duration recording interval at the start of each session. The algorithm first builds a database of features extracted from units' average spike waveforms and firing patterns across many days of recording. It then uses these features to decide whether spike occurrences on the same channel on one day belong to the same unit recorded on another day or not. We assessed the overall performance of the algorithm for different choices of features and classifiers trained using human expert judgment, and quantified it as a function of accuracy and execution time. Overall, we found a trade-off between accuracy and execution time with increasing data volumes from chronically implanted rhesus macaques, with an average of 12 seconds processing time per channel at ~90% classification accuracy. Furthermore, 77% of the resulting putative single-units matched those tracked by human experts. These results demonstrate that over the span of a few months of recordings, automated unit tracking can be performed with high accuracy and used to streamline the calibration phase during BMI sessions.

  6. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli ' Federico II' , Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)

    2017-07-01

    The maximum size of a cosmic structure is given by the maximum turnaround radius—the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value, by a factor of 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.
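
    For reference, the ΛCDM benchmark quoted above is the standard maximum turnaround radius of a structure of mass M in a universe with cosmological constant Λ (a textbook result, not rederived here):

    ```latex
    R_{\mathrm{TA}}^{\max} = \left( \frac{3\, G M}{\Lambda c^{2}} \right)^{1/3}
    ```

    The Brans-Dicke sizes discussed in the abstract exceed this value by the factor 1 + 1/(3ω).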

  7. Fabrication and Characterization of Polymeric Hollow Fiber Membranes with Nano-scale Pore Sizes

    International Nuclear Information System (INIS)

    Amir Mansourizadeh; Ahmad Fauzi Ismail

    2011-01-01

    Porous polyvinylidene fluoride (PVDF) and polysulfone (PSF) hollow fiber membranes were fabricated via a wet spinning method. The membranes were characterized in terms of gas permeability, wetting pressure, overall porosity and water contact angle. The morphology of the membranes was examined by FESEM. From the gas permeation test, mean pore sizes of 7.3 and 9.6 nm were obtained for the PSF and PVDF membranes, respectively. Using a low polymer concentration in the dopes, the membranes demonstrated a relatively high overall porosity of 77%. From the FESEM examination, the PSF membrane presented a denser outer skin layer, which resulted in a significantly lower N2 permeance. Therefore, owing to the high hydrophobicity and nano-scale pore sizes of the PVDF membrane, a good wetting pressure of 4.5×10^5 Pa was achieved. (author)

  8. Nasonia Parasitic Wasps Escape from Haller's Rule by Diphasic, Partially Isometric Brain-Body Size Scaling and Selective Neuropil Adaptations

    NARCIS (Netherlands)

    Groothuis, Jitte; Smid, Hans M.

    2017-01-01

    Haller's rule states that brains scale allometrically with body size in all animals, meaning that relative brain size increases with decreasing body size. This rule applies to both inter- and intraspecific comparisons. Only one species, the extremely small parasitic wasp Trichogramma evanescens, is known to be an exception to this rule.

  9. Many ways to be small: different environmental regulators of size generate distinct scaling relationships in Drosophila melanogaster

    OpenAIRE

    Shingleton, Alexander W.; Estep, Chad M.; Driscoll, Michael V.; Dworkin, Ian

    2009-01-01

    Static allometries, the scaling relationship between body and trait size, describe the shape of animals in a population or species, and are generated in response to variation in genetic or environmental regulators of size. In principle, allometries may vary with the different size regulators that generate them, which can be problematic since allometric differences are also used to infer patterns of selection on morphology. We test this hypothesis by examining the patterns of scaling in Drosop...

  10. The scaling of urban surface water abundance and impairment with city size

    Science.gov (United States)

    Steele, M. K.

    2018-03-01

    Urbanization alters surface water compared to nonurban landscapes, yet little is known regarding how basic aquatic ecosystem characteristics, such as the abundance and impairment of surface water, differ with population size or regional context. This study examined the abundance, scaling, and impairment of surface water by quantifying the stream length, water body area, and impaired stream length for 3520 cities in the United States with populations from 2500 to 18 million. Stream length, water body area, and impaired stream length were quantified using the National Hydrography Dataset and the EPA's 303(d) list. These metrics were scaled with population and city area using single and piecewise power-law models and related to biophysical factors (precipitation, topography) and land cover. Results show that abundance of stream length and water body area in cities actually increases with city area; however, the per person abundance decreases with population size. Relative to population, impaired stream length did not increase until city populations were > 25,000 people, then scaled linearly with population. Some variation in abundance and impairment was explained by biophysical context and land cover. Development intensity correlated with stream density and impairment; however, those relationships depended on the orientation of the land covers. When high intensity development occupied the local elevation highs (+ 15 m) and undeveloped land the elevation lows, the percentage of impaired streams was less than the opposite land cover orientation (- 15 m) or very flat land. These results show that surface water abundance and impairment across contiguous US cities are influenced by city size and by biophysical setting interacting with land cover intensity.

  11. Probabilistic finite-size transport models for fusion: Anomalous transport and scaling laws

    International Nuclear Information System (INIS)

    Milligen, B.Ph. van; Sanchez, R.; Carreras, B.A.

    2004-01-01

    Transport in fusion plasmas in the low confinement mode is characterized by several remarkable properties: the anomalous scaling of transport with system size, stiff (or 'canonical') profiles, power degradation, and rapid transport phenomena. The present article explores the possibilities of constructing a unified transport model, based on the continuous-time random walk, in which all these phenomena are handled adequately. The resulting formalism appears to be sufficiently general to provide a sound starting point for the development of a full-blown plasma transport code, capable of incorporating the relevant microscopic transport mechanisms, and allowing predictions of confinement properties

  12. Scaling of laser-plasma interactions with laser wavelength and plasma size

    International Nuclear Information System (INIS)

    Max, C.E.; Campbell, E.M.; Mead, W.C.; Kruer, W.L.; Phillion, D.W.; Turner, R.E.; Lasinski, B.F.; Estabrook, K.G.

    1983-01-01

    Plasma size is an important parameter in wavelength-scaling experiments because it determines both the threshold and potential gain for a variety of laser-plasma instabilities. Most experiments to date have of necessity produced relatively small plasmas, due to laser energy and pulse-length limitations. We have discussed in detail three recent Livermore experiments which had large enough plasmas that some instability thresholds were exceeded or approached. Our evidence for Raman scatter, filamentation, and the two-plasmon decay instability needs to be confirmed in experiments which measure several instability signatures simultaneously, and which produce more quantitative information about the local density and temperature profiles than we have today

  13. Scaling of laser-plasma interactions with laser wavelength and plasma size

    Energy Technology Data Exchange (ETDEWEB)

    Max, C.E.; Campbell, E.M.; Mead, W.C.; Kruer, W.L.; Phillion, D.W.; Turner, R.E.; Lasinski, B.F.; Estabrook, K.G.

    1983-01-25

    Plasma size is an important parameter in wavelength-scaling experiments because it determines both the threshold and potential gain for a variety of laser-plasma instabilities. Most experiments to date have of necessity produced relatively small plasmas, due to laser energy and pulse-length limitations. We have discussed in detail three recent Livermore experiments which had large enough plasmas that some instability thresholds were exceeded or approached. Our evidence for Raman scatter, filamentation, and the two-plasmon decay instability needs to be confirmed in experiments which measure several instability signatures simultaneously, and which produce more quantitative information about the local density and temperature profiles than we have today.

  14. Finite size scaling analysis on Nagel-Schreckenberg model for traffic flow

    Science.gov (United States)

    Balouchi, Ashkan; Browne, Dana

    2015-03-01

    The traffic flow problem, as a many-particle non-equilibrium system, has caught the interest of physicists for decades. Understanding traffic flow properties, and thus obtaining the ability to control the transition from the free-flow phase to the jammed phase, plays a critical role in the emerging world of self-driving car technology. We have studied phase transitions in one-lane traffic flow through the mean velocity, distributions of car spacing, dynamic susceptibility and jam persistence (as candidates for an order parameter), using the Nagel-Schreckenberg model to simulate traffic flow. The length-dependent transition has been observed for a range of maximum velocities greater than a certain value. Finite size scaling analysis indicates power-law scaling of these quantities at the onset of the jammed phase.
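
    For concreteness, a minimal implementation of the Nagel-Schreckenberg update rule used in the study (accelerate, brake to the gap, random slowdown with probability p, move) on a circular one-lane road; parameters are illustrative.

    ```python
    import numpy as np

    def nasch_step(pos, vel, L, rng, vmax=5, p=0.3):
        order = np.argsort(pos)                       # keep cars ordered on ring
        pos, vel = pos[order], vel[order]
        gaps = (np.roll(pos, -1) - pos - 1) % L       # empty cells ahead
        vel = np.minimum(vel + 1, vmax)               # 1. accelerate
        vel = np.minimum(vel, gaps)                   # 2. brake to the gap
        slow = (rng.random(vel.size) < p) & (vel > 0)
        vel = np.where(slow, vel - 1, vel)            # 3. random slowdown
        return (pos + vel) % L, vel                   # 4. move

    L, N = 1000, 150                                  # road length, car count
    rng = np.random.default_rng(1)
    pos = np.sort(rng.choice(L, size=N, replace=False))
    vel = np.zeros(N, dtype=int)
    for _ in range(2000):
        pos, vel = nasch_step(pos, vel, L, rng)
    print("mean velocity:", vel.mean())   # drops sharply past the jamming density
    ```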

  15. Determining the size of a complete disturbance landscape: multi-scale, continental analysis of forest change.

    Science.gov (United States)

    Buma, Brian; Costanza, Jennifer K; Riitters, Kurt

    2017-11-21

    The scale of investigation for disturbance-influenced processes plays a critical role in theoretical assumptions about stability, variance, and equilibrium, as well as conservation reserve and long-term monitoring program design. Critical consideration of scale is required for robust planning designs, especially when anticipating future disturbances whose exact locations are unknown. This research quantified disturbance proportion and pattern (as contagion) at multiple scales across North America. This pattern of scale-associated variability can guide selection of study and management extents, for example, to minimize variance (measured as standard deviation) between any landscapes within an ecoregion. We identified the proportion and pattern of forest disturbance (30 m grain size) across multiple landscape extents up to 180 km². We explored the variance in the proportion of disturbed area and the pattern of that disturbance between landscapes (within an ecoregion) as a function of the landscape extent. In many ecoregions, variance between landscapes within an ecoregion was minimal at broad landscape extents (low standard deviation). Gap-dominated regions showed the least variance, while fire-dominated regions showed the largest. Intensively managed ecoregions displayed unique patterns. A majority of the ecoregions showed low variance between landscapes at some scale, indicating that an appropriate extent for incorporating natural regimes and unknown future disturbances can be identified. The quantification of the scales of disturbance at the ecoregion level provides guidance for individuals interested in anticipating future disturbances which will occur in unknown spatial locations. Information on the extents required to incorporate disturbance patterns into planning is crucial for that process.

  16. Anisotropic modulus stabilisation. Strings at LHC scales with micron-sized extra dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Burgess, C.P. [McMaster Univ., Hamilton (Canada). Dept. of Physics and Astronomy; Perimeter Institute for Theoretical Physics, Waterloo (Canada); Quevedo, F. [Cambridge Univ. (United Kingdom). DAMTP/CMS; Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)

    2011-04-15

    We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exist having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are present on K3-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and

  17. Anisotropic modulus stabilisation. Strings at LHC scales with micron-sized extra dimensions

    International Nuclear Information System (INIS)

    Cicoli, M.; Burgess, C.P.; Quevedo, F.

    2011-04-01

    We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exist having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are present on K3-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and

  18. Avalanching Systems with Longer Range Connectivity: Occurrence of a Crossover Phenomenon and Multifractal Finite Size Scaling

    Directory of Open Access Journals (Sweden)

    Simone Benella

    2017-07-01

    Full Text Available Many out-of-equilibrium systems respond to external driving with nonlinear and self-similar dynamics. This near scale-invariant behavior of relaxation events has been modeled through sandpile cellular automata. However, a common feature of these models is the assumption of local connectivity, while in many real systems we have evidence for longer range connectivity and a complex topology of the interacting structures. Here, we investigate the role that longer range connectivity might play in near scale-invariant systems by analyzing the results of a sandpile cellular automaton model on a Newman–Watts network. The analysis clearly indicates the occurrence of a crossover phenomenon in the statistics of the relaxation events as a function of the percentage of longer range links, and the breaking of simple Finite Size Scaling (FSS). The more complex nature of the dynamics in the presence of long-range connectivity is investigated in terms of multi-scaling features and analyzed by Rank-Ordered Multifractal Analysis (ROMA).
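
    A sketch of a sandpile automaton on a Newman-Watts network follows. The toppling rule (threshold equal to degree, grains lost with small bulk-dissipation probability f) follows a common network-sandpile convention and may differ in detail from the authors' model; networkx provides the Newman-Watts graph generator.

    ```python
    import random
    import networkx as nx

    def avalanche_sizes(n=500, k=4, p_link=0.1, f=0.01, grains=5_000, seed=0):
        random.seed(seed)
        G = nx.newman_watts_strogatz_graph(n, k, p_link, seed=seed)
        load = {v: 0 for v in G}
        sizes = []
        for _ in range(grains):
            load[random.randrange(n)] += 1            # drive: add one grain
            size = 0
            unstable = [v for v in G if load[v] >= G.degree(v)]
            while unstable:
                v = unstable.pop()
                if load[v] < G.degree(v):
                    continue
                load[v] -= G.degree(v)                 # topple node v
                size += 1
                for u in G[v]:
                    if random.random() > f:            # grain survives dissipation
                        load[u] += 1
                        if load[u] >= G.degree(u):
                            unstable.append(u)
            sizes.append(size)
        return sizes

    s = avalanche_sizes()
    print("largest avalanche:", max(s))
    # increasing p_link adds long-range links, shifting the avalanche statistics
    ```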

  19. Effects of chlorpyrifos on soil carboxylesterase activity at an aggregate-size scale.

    Science.gov (United States)

    Sanchez-Hernandez, Juan C; Sandoval, Marco

    2017-08-01

    The impact of pesticides on extracellular enzyme activity has mostly been studied at the bulk soil scale, and our understanding of the impact at an aggregate-size scale remains limited. Because microbial processes, and their extracellular enzyme production, depend on the size of soil aggregates, we hypothesized that the effect of pesticides on enzyme activities is aggregate-size specific. We performed three experiments using an Andisol to test the interaction between carboxylesterase (CbE) activity and the organophosphorus (OP) pesticide chlorpyrifos. First, we compared esterase activity among aggregates of different size spiked with chlorpyrifos (10 mg kg⁻¹ wet soil). Next, we examined the inhibition of CbE activity by chlorpyrifos and its metabolite chlorpyrifos-oxon in vitro to explore the aggregate size-dependent affinity of the pesticides for the active site of the enzyme. Lastly, we assessed the capability of CbEs to alleviate chlorpyrifos toxicity upon soil microorganisms. Our principal findings were: 1) CbE activity was significantly inhibited (30-67% of controls) in the microaggregates (<0.25 mm) and macroaggregates (>1.0 mm) compared with the corresponding controls (i.e., pesticide-free aggregates), 2) chlorpyrifos-oxon was a more potent CbE inhibitor than chlorpyrifos; however, no significant differences in the CbE inhibition were found between micro- and macroaggregates, and 3) dose-response relationships between CbE activity and chlorpyrifos concentrations revealed the capability of the enzyme to bind chlorpyrifos-oxon, which was dependent on the time of exposure. This chemical interaction resulted in a safeguarding mechanism against chlorpyrifos-oxon toxicity on soil microbial activity, as evidenced by the unchanged activity of dehydrogenase and related extracellular enzymes in the pesticide-treated aggregates. Taken together, these results suggest that environmental risk assessments of OP-polluted soils should consider the fractionation of soil in aggregates of different size to measure

  20. Multi-machine scaling of the main SOL parallel heat flux width in tokamak limiter plasmas

    Czech Academy of Sciences Publication Activity Database

    Horáček, Jan; Pitts, R.A.; Adámek, Jiří; Arnoux, G.; Bak, J.-G.; Brezinsek, S.; Dimitrova, Miglena; Goldston, R.J.; Gunn, J. P.; Havlíček, Josef; Hong, S.-H.; Janky, Filip; LaBombard, B.; Marsen, S.; Maddaluno, G.; Nie, L.; Pericoli, V.; Popov, Tsv.; Pánek, Radomír; Rudakov, D.; Seidl, Jakub; Seo, D.S.; Shimada, M.; Silva, C.; Stangeby, P.C.; Viola, B.; Vondráček, Petr; Wang, H.; Xu, G.S.; Xu, Y.

    2016-01-01

    Roč. 58, č. 7 (2016), č. článku 074005. ISSN 0741-3335 R&D Projects: GA ČR(CZ) GAP205/12/2327; GA ČR(CZ) GA15-10723S; GA MŠk(CZ) LM2011021 EU Projects: European Commission(XE) 633053 - EUROfusion Institutional support: RVO:61389021 Keywords : tokamak * ITER * SOL decay length * SOL width * scaling Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 2.392, year: 2016 http://iopscience.iop.org/article/10.1088/0741-3335/58/7/074005

  1. Theoretical explanation of present mirror experiments and linear stability of larger scaled machines

    International Nuclear Information System (INIS)

    Berk, H.L.; Baldwin, D.E.; Cutler, T.A.; Lodestro, L.L.; Maron, N.; Pearlstein, L.D.; Rognlien, T.D.; Stewart, J.J.; Watson, D.C.

    1976-01-01

    A quasilinear model for the evolution of the 2XIIB mirror experiment is presented and shown to reproduce the time evolution of the experiment. From quasilinear theory it follows that the energy lifetime is the Spitzer electron drag time for T_e ≲ 0.1 T_i. By computing the stability boundary of the DCLC mode, with warm plasma stabilization, the electron temperature is predicted as a function of radial scale length. In addition, the effect of finite length corrections to the Alfvén cyclotron mode is assessed

  2. Size effect studies on geometrically scaled three point bend type specimens with U-notches

    Energy Technology Data Exchange (ETDEWEB)

    Krompholz, K.; Kalkhof, D.; Groth, E.

    2001-02-01

    One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess size and scale effects in plastic flow and failure. This includes an experimental programme devoted to characterising the influence of specimen size, strain rate, and strain gradients at various temperatures. One of the materials selected was the forged reactor pressure vessel material 20 MnMoNi 55, material number 1.6310 (heat number 69906). Among others, a size effect study of the creep response of this material was performed, using geometrically similar smooth specimens with 5 mm and 20 mm diameter. The tests were done under constant load in an inert atmosphere at 700 °C, 800 °C, and 900 °C, close to and within the phase transformation regime. The mechanical stresses varied from 10 MPa to 30 MPa, depending on temperature. Prior to creep testing, the temperature and time dependence of scale oxidation as well as the temperature regime of the phase transformation was determined. The creep tests were supplemented by metallographical investigations. The test results are presented in the form of creep curves (strain versus time) from which characteristic creep data were determined as a function of the stress level at given temperatures. The characteristic data are the times to 5% and 15% strain and to rupture, the secondary (minimum) creep rate, the elongation at fracture within the gauge length, the type of fracture and the area reduction after fracture. From the metallographic investigations the phase contents at different temperatures could be estimated. From these data the parameters of the regression calculation (e.g. Norton's creep law) were also obtained. The evaluation revealed that the creep curves and characteristic data are size dependent to varying degree, depending on the stress and temperature level, but the size influence cannot be related to corrosion or orientation effects or to macroscopic heterogeneity (position effect
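    Where the abstract mentions regression with Norton's creep law, the fit is a straight line in log-log coordinates. A minimal sketch follows; the stress and strain-rate values are invented stand-ins, not REVISA data.

        import numpy as np

        # Hypothetical secondary (minimum) creep rates at one temperature:
        # stresses in MPa, rates in 1/h (illustrative values only).
        stress = np.array([10.0, 15.0, 20.0, 30.0])
        rate   = np.array([2.1e-4, 1.1e-3, 3.6e-3, 2.0e-2])

        # Norton's law: rate = A * stress**n, i.e. linear in log-log coordinates.
        n_exp, logA = np.polyfit(np.log(stress), np.log(rate), 1)
        print(f"Norton fit: rate = {np.exp(logA):.3e} * sigma^{n_exp:.2f}")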

  3. Scaling up graph-based semisupervised learning via prototype vector machines.

    Science.gov (United States)

    Zhang, Kai; Lan, Liang; Kwok, James T; Vucetic, Slobodan; Parvin, Bahram

    2015-03-01

    When the amount of labeled data is limited, semisupervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via l1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning.
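    The central trick, replacing the n x n graph by n x m affinities to a few prototypes, can be illustrated with a toy two-step propagation. This is a drastic simplification of the paper's prototype vector machine; the prototype count m and bandwidth sigma below are arbitrary choices of ours.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_moons

        # Toy data: two classes, only five labels revealed per class.
        X, y = make_moons(n_samples=1000, noise=0.08, random_state=0)
        labels = -np.ones(len(X), dtype=int)
        labels[np.where(y == 0)[0][:5]] = 0
        labels[np.where(y == 1)[0][:5]] = 1

        # m prototypes summarise the manifold (m << n).
        m, sigma = 30, 0.3
        proto = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_

        # Gaussian affinities between points and prototypes only: n*m, not n*n.
        d2 = ((X[:, None, :] - proto[None, :, :]) ** 2).sum(-1)
        Z = np.exp(-d2 / (2 * sigma ** 2))
        Z /= Z.sum(1, keepdims=True)

        # Score prototypes from the labeled points, then push scores back out.
        mask = labels >= 0
        proto_score = Z[mask].T @ (2 * labels[mask] - 1)   # signed class votes
        pred = (Z @ proto_score > 0).astype(int)
        print("agreement with ground truth:", (pred == y).mean())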

  4. Small Scale Yielding Correction of Constraint Loss in Small Sized Fracture Toughness Test Specimens

    International Nuclear Information System (INIS)

    Kim, Maan Won; Kim, Min Chul; Lee, Bong Sang; Hong, Jun Hwa

    2005-01-01

    Fracture toughness data in the ductile-brittle transition region of ferritic steels show scatter produced by local sampling effects and a specimen geometry dependence which results from relaxation in crack tip constraint. ASTM E1921 provides a standard test method to define the median toughness temperature curve, the so-called Master Curve, for the material corresponding to a 1T crack front length, and also defines a reference temperature, T_0, at which the median toughness value is 100 MPa√m for a 1T size specimen. The ASTM E1921 procedures assume that high-constraint, small-scale yielding (SSY) conditions prevail at fracture along the crack front. Violation of the SSY assumption occurs most often during tests of smaller specimens. Constraint loss in such cases leads to higher toughness values and thus to lower T_0 values. When applied to a structure with a low-constraint geometry, the standard fracture toughness estimates may lead to strongly over-conservative estimates. Many efforts have been made to adjust for the constraint effect. In this work, we applied a small-scale yielding correction (SSYC) to adjust for the constraint loss of 1/3PCVN and PCVN specimens, which are relatively smaller than the 1T size specimen, in the fracture toughness Master Curve test
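    For orientation, two standard ASTM E1921 relations mentioned above can be written down directly: the median Master Curve and the weakest-link adjustment of a measured toughness to the 1T reference thickness. The numerical inputs below are invented for illustration.

        import numpy as np

        def master_curve_median(T, T0):
            """ASTM E1921 median toughness for a 1T specimen, MPa*sqrt(m); T, T0 in deg C."""
            return 30.0 + 70.0 * np.exp(0.019 * (T - T0))

        def to_1T(K_Jc, B_mm, K_min=20.0):
            """Weakest-link size adjustment from thickness B_mm to the 1T reference (25.4 mm)."""
            return K_min + (K_Jc - K_min) * (B_mm / 25.4) ** 0.25

        print(master_curve_median(T=-60.0, T0=-60.0))  # 100 MPa*sqrt(m) by definition of T0
        print(to_1T(K_Jc=180.0, B_mm=10.0))            # PCVN-sized result adjusted to 1T

    Note that this size adjustment handles only the statistical crack-front-length effect; the constraint-loss correction (SSYC) studied in the paper is a separate step.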

  5. Economies of scale and firm size optimum in rural water supply

    Science.gov (United States)

    Sauer, Johannes

    2005-11-01

    This article is focused on modeling and analyzing the cost structure of water-supplying companies. A cross-sectional data set was collected with respect to water firms in rural areas of former East and West Germany. The empirical data are analyzed by applying a symmetric generalized McFadden (SGM) functional form. This flexible functional form allows for testing the concavity required by microeconomic theory as well as the global imposition of such curvature restrictions without any loss of flexibility. The original specification of the SGM cost function is modified to incorporate fixed factors of water production and supply as, for example, groundwater intake or the number of connections supplied. The estimated flexible and global curvature correct cost function is then used to derive scale elasticities as well as the optimal firm size. The results show that no water supplier in the sample produces at constant returns to scale. The optimal firm size was found to be on average about three times larger than the existing one. These findings deliver evidence for the hypothesis that the legally set supplying areas, oriented at public administrative criteria as well as local characteristics of water resources, are economically inefficient. Hence structural inefficiency in the rural water sector is confirmed to be policy induced.

  6. Contribution to the multi-machine pedestal scaling from COMPASS tokamak

    Czech Academy of Sciences Publication Activity Database

    Komm, Michael; Bílková, Petra; Aftanas, Milan; Berta, Miklós; Böhm, Petr; Bogár, Ondrej; Frassinetti, L.; Grover, Ondřej; Háček, Pavel; Havlíček, Josef; Hron, Martin; Imríšek, Martin; Krbec, Jaroslav; Mitošinková, Klára; Naydenkova, Diana; Pánek, Radomír; Peterka, Matěj; Snyder, P.B.; Stefanikova, E.; Stöckel, Jan; Šos, Miroslav; Urban, Jakub; Varju, Jozef; Vondráček, Petr; Weinzettl, Vladimír

    2017-01-01

    Roč. 57, č. 5 (2017), č. článku 056041. ISSN 0029-5515. [IAEA Fusion Energy Conference (FEC 2016)/26./. Kyoto, 17.10.2016-22.10.2016] R&D Projects: GA ČR(CZ) GA14-35260S; GA ČR(CZ) GA16-14228S; GA MŠk(CZ) 8D15001 EU Projects: European Commission(XE) 633053 - EUROfusion Institutional support: RVO:61389021 Keywords : COMPASS * H-mode * pedestal * scaling * tokamak * HRTS Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 3.307, year: 2016 https://doi.org/10.1088/1741-4326/aa6659

  7. To address surface reaction network complexity using scaling relations, machine learning and DFT calculations

    International Nuclear Information System (INIS)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; Nørskov, Jens K.

    2017-01-01

    Surface reaction networks involving hydrocarbons exhibit enormous complexity, with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process to predict adsorption energies from group-additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
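    The train-on-the-fly loop can be caricatured in a few lines: fit a Gaussian process on the energies computed so far, then spend the next expensive calculation on the most uncertain candidate. Everything below (random stand-in fingerprints, a lookup table playing the role of DFT, the kernel choice) is an assumption for illustration only.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)

        # Hypothetical group-additivity fingerprints and stand-in "DFT" energies.
        X = rng.normal(size=(200, 8))
        true_E = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)

        computed = list(range(5))          # start from a few explicit calculations
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

        for step in range(10):
            gp.fit(X[computed], true_E[computed])
            mu, std = gp.predict(X, return_std=True)
            std[computed] = 0.0            # never re-select a finished calculation
            computed.append(int(np.argmax(std)))   # "run DFT" on the most uncertain step

        print(len(computed), "explicit calculations performed")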

  8. Scale size and life time of energy conversion regions observed by Cluster in the plasma sheet

    Directory of Open Access Journals (Sweden)

    M. Hamrin

    2009-11-01

    Full Text Available In this article, and in a companion paper by Hamrin et al. (2009) [Occurrence and location of concentrated load and generator regions observed by Cluster in the plasma sheet], we investigate localized energy conversion regions (ECRs) in Earth's plasma sheet. From more than 80 Cluster plasma sheet crossings (660 h of data) at the altitude of about 15–20 R_E in the summer and fall of 2001, we have identified 116 Concentrated Load Regions (CLRs) and 35 Concentrated Generator Regions (CGRs). By examining variations in the power density, E·J, where E is the electric field and J is the current density obtained by Cluster, we have estimated typical values of the scale size and life time of the CLRs and the CGRs. We find that a majority of the observed ECRs are rather stationary in space, but varying in time. Assuming that the ECRs are cylindrically shaped and equal in size, we conclude that the typical scale size of the ECRs is 2 R_E ≲ ΔS_ECR ≲ 5 R_E. The ECRs hence occupy a significant portion of the mid-altitude plasma sheet. Moreover, the CLRs appear to be somewhat larger than the CGRs. The life times of the ECRs are of the order of 1–10 min, consistent with the large-scale magnetotail MHD simulations of Birn and Hesse (2005). The life time of the CGRs is somewhat shorter than that of the CLRs. On time scales of 1–10 min, we believe that ECRs rise and vanish in significant regions of the plasma sheet, possibly oscillating between load and generator character. It is probable that at least some of the observed ECRs oscillate energy back and forth in the plasma sheet instead of channeling it to the ionosphere.

  9. Predicting the size and elevation of future mountain forests: Scaling macroclimate to microclimate

    Science.gov (United States)

    Cory, S. T.; Smith, W. K.

    2017-12-01

    Global climate change is predicted to alter continental scale macroclimate and regional mesoclimate. Yet, it is at the microclimate scale that organisms interact with their physiochemical environments. Thus, to predict future changes in the biota such as biodiversity and distribution patterns, a quantitative coupling between macro-, meso-, and microclimatic parameters must be developed. We are evaluating the impact of climate change on the size and elevational distribution of conifer mountain forests by determining the microclimate necessary for new seedling survival at the elevational boundaries of the forest. This initial life stage, only a few centimeters away from the soil surface, appears to be the bottleneck to treeline migration and the expansion or contraction of a conifer mountain forest. For example, survival at the alpine treeline is extremely rare and appears to be limited to facilitated microsites with low sky exposure. Yet, abundant mesoclimate data from standard weather stations have rarely been scaled to the microclimate level. Our research is focusing on an empirical downscaling approach linking microclimate measurements at favorable seedling microsites to the meso- and macro-climate levels. Specifically, mesoclimate values of air temperature, relative humidity, incident sunlight, and wind speed from NOAA NCEI weather stations can be extrapolated to the microsite level that is physiologically relevant for seedling survival. Data will be presented showing a strong correlation between incident sunlight measured at 2-m and seedling microclimate, despite large differences from seedling/microsite temperatures. Our downscaling approach will ultimately enable predictions of microclimate from the much more abundant mesoclimate data available from a variety of sources. Thus, scaling from macro- to meso- to microclimate will be possible, enabling predictions of climate change models to be translated to the microsite level. This linkage between measurement

  10. Large Scale Behavior and Droplet Size Distributions in Crude Oil Jets and Plumes

    Science.gov (United States)

    Katz, Joseph; Murphy, David; Morra, David

    2013-11-01

    The 2010 Deepwater Horizon blowout introduced several million barrels of crude oil into the Gulf of Mexico. Injected initially as a turbulent jet containing crude oil and gas, the spill caused formation of a subsurface plume stretching for tens of miles. The behavior of such buoyant multiphase plumes depends on several factors, such as the oil droplet and bubble size distributions, current speed, and ambient stratification. While large droplets quickly rise to the surface, fine ones together with entrained seawater form intrusion layers. Many elements of the physics of droplet formation by an immiscible turbulent jet and their resulting size distribution have not been elucidated, but are known to be significantly influenced by the addition of dispersants, which vary the Weber Number by orders of magnitude. We present experimental high speed visualizations of turbulent jets of sweet petroleum crude oil (MC 252) premixed with Corexit 9500A dispersant at various dispersant to oil ratios. Observations were conducted in a 0.9 m × 0.9 m × 2.5 m towing tank, where large-scale behavior of the jet, both stationary and towed at various speeds to simulate cross-flow, have been recorded at high speed. Preliminary data on oil droplet size and spatial distributions were also measured using a videoscope and pulsed light sheet. Sponsored by Gulf of Mexico Research Initiative (GoMRI).

  11. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. When uncertainty in assessments is considered, the lowest error rates are with dichotomization; while using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
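    The paper's central computation, pushing a rater-noise distribution through a trial's outcome distribution to get an error rate for "shift" versus dichotomized readings, can be sketched as follows. Both distributions here are invented placeholders; the published inter-rater matrices would replace the toy noise matrix.

        import numpy as np

        # Hypothetical trial distribution over mRS 0..6 (sums to 1).
        p_mrs = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])

        # Hypothetical confusion matrix: noise[i, j] = P(rated j | true i).
        noise = np.full((7, 7), 0.01)
        np.fill_diagonal(noise, 0.80)
        for i in range(6):
            noise[i, i + 1] += 0.07
            noise[i + 1, i] += 0.07
        noise /= noise.sum(1, keepdims=True)

        # "Shift" analysis: any off-diagonal rating is an error.
        err_shift = (p_mrs * (1.0 - np.diag(noise))).sum()

        # Dichotomization at cut c: only ratings crossing the cut are errors.
        def err_dichot(c):
            wrong = [noise[i, c + 1:].sum() if i <= c else noise[i, :c + 1].sum()
                     for i in range(7)]
            return (p_mrs * np.array(wrong)).sum()

        print(f"shift error {err_shift:.1%}, mRS<=1 cut error {err_dichot(1):.1%}")

    As in the paper, the cut-point error is necessarily smaller, because only misratings that cross the cut change the dichotomized outcome.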

  12. Detecting Neolithic Burial Mounds from LiDAR-Derived Elevation Data Using a Multi-Scale Approach and Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Alexandre Guyot

    2018-02-01

    Full Text Available Airborne LiDAR technology is widely used in archaeology and over the past decade has emerged as an accurate tool to describe anthropomorphic landforms. Archaeological features are traditionally emphasised on a LiDAR-derived Digital Terrain Model (DTM) using multiple Visualisation Techniques (VTs), and occasionally aided by automated feature detection or classification techniques. Such an approach offers limited results when applied to heterogeneous structures (different sizes, morphologies), which is often the case for archaeological remains that have been altered throughout the ages. This study proposes to overcome these limitations by developing a multi-scale analysis of topographic position combined with supervised machine learning algorithms (Random Forest). Rather than highlighting individual topographic anomalies, the multi-scalar approach allows archaeological features to be examined not only as individual objects, but within their broader spatial context. This innovative and straightforward method provides two levels of results: a composite image of topographic surface structure and a probability map of the presence of archaeological structures. The method was developed to detect and characterise megalithic funeral structures in the region of Carnac, the Bay of Quiberon, and the Gulf of Morbihan (France), which is currently considered for inclusion on the UNESCO World Heritage List. As a result, known archaeological sites have successfully been geo-referenced with a greater accuracy than before (even when located under dense vegetation) and a ground-check confirmed the identification of a previously unknown Neolithic burial mound in the commune of Carnac.
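    The multi-scale topographic-position idea translates almost directly into code: subtract neighbourhood means at several window sizes and feed the stack to a Random Forest. The sketch below uses a synthetic terrain with two mound-like bumps in place of a LiDAR DTM, and is not the authors' pipeline (it even trains and predicts on the same raster, which a real study would never do).

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        yy, xx = np.mgrid[0:128, 0:128]
        bumps = (np.exp(-((xx - 40) ** 2 + (yy - 40) ** 2) / 40.0)
                 + np.exp(-((xx - 90) ** 2 + (yy - 80) ** 2) / 90.0))
        dtm = 0.02 * yy + bumps + 0.02 * rng.normal(size=(128, 128))
        truth = bumps > 0.5                     # "known archaeological structures"

        # Multi-scale topographic position: elevation minus the local mean
        # in windows of increasing size.
        scales = [3, 7, 15, 31]
        feats = np.stack([dtm - uniform_filter(dtm, size=s) for s in scales], -1)
        X, y = feats.reshape(-1, len(scales)), truth.ravel()

        rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        prob_map = rf.predict_proba(X)[:, 1].reshape(dtm.shape)
        print("pixels flagged as probable mounds:", int((prob_map > 0.5).sum()))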

  13. Large-Scale Sentinel-1 Processing for Solid Earth Science and Urgent Response using Cloud Computing and Machine Learning

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.

    2017-12-01

    With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced with processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Aspects such as how to process these voluminous SLCs and interferograms at global scales, how to keep up with the large daily SAR data volumes, and how to handle the high data rates are being explored. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms, with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products and challenges that arise from being able to process SAR datasets to derived time series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud? We will also

  14. Potential Size of and Value Proposition for H2@Scale Concept

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, Mark F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jadun, Paige [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Pivovar, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Elgowainy, Amgad [Argonne National Laboratory

    2017-11-09

    The H2@Scale concept is focused on developing hydrogen as an energy carrier and using hydrogen's properties to improve the national energy system. Specifically, hydrogen has the ability to (1) supply a clean energy source for industry and transportation and (2) increase the profitability of variable renewable electricity generators such as wind turbines and solar photovoltaic (PV) farms by providing value for otherwise potentially-curtailed electricity. Thus the concept also has the potential to reduce oil dependency by providing a low-carbon fuel for fuel cell electric vehicles (FCEVs), reduce emissions of carbon dioxide and pollutants such as NOx, and support domestic energy production, manufacturing, and U.S. economic competitiveness. The analysis reported here focuses on the potential market size and value proposition for the H2@Scale concept. It involves three analysis phases: 1. Initial phase estimating the technical potential for hydrogen markets and the resources required to meet them; 2. National-scale analysis of the economic potential for hydrogen and the interactions between willingness to pay by hydrogen users and the cost to produce hydrogen from various sources; and 3. In-depth analysis of spatial and economic issues impacting hydrogen production and utilization and the markets. Preliminary analysis indicates that the technical potential for hydrogen use is approximately 60 million metric tons (MMT) annually for light duty FCEVs, heavy duty vehicles, ammonia production, oil refining, biofuel hydrotreating, metals refining, and injection into the natural gas system. The technical potential of utility-scale PV and wind generation independently are much greater than that necessary to produce 60 MMT / year hydrogen. Uranium, natural gas, and coal reserves are each sufficient to produce 60 MMT / year hydrogen in addition to their current uses for decades to centuries. National estimates of the economic potential of

  15. The Macdonald and Savage titrimetric procedure scaled down to 4 mg sized plutonium samples. P. 1

    International Nuclear Information System (INIS)

    Kuvik, V.; Lecouteux, C.; Doubek, N.; Ronesch, K.; Jammet, G.; Bagliano, G.; Deron, S.

    1992-01-01

    The original Macdonald and Savage amperometric method scaled down to milligram-sized plutonium samples was further modified. The electro-chemical process of each redox step and the end-point of the final titration were monitored potentiometrically. The method is designed to determine 4 mg of plutonium dissolved in nitric acid solution. It is suitable for the direct determination of plutonium in non-irradiated fuel with a uranium-to-plutonium ratio of up to 30. The precision and accuracy are ca. 0.05-0.1% (relative standard deviation). Although the procedure is very selective, the following species interfere: vanadyl(IV) and vanadate (almost quantitatively), neptunium (one electron exchange per mole), nitrites, fluorosilicates (milligram amounts yield a slight bias) and iodates. (author). 15 refs.; 8 figs.; 7 tabs

  16. Size scale dependence of compressive instabilities in layered composites in the presence of stress gradients

    DEFF Research Database (Denmark)

    Poulios, Konstantinos; Niordson, Christian Frithiof

    2016-01-01

    The compressive strength of unidirectionally or layer-wise reinforced composite materials in the direction parallel to their reinforcement is limited by micro-buckling instabilities. Although the inherent compressive strength of a given material micro-structure can easily be determined by assessing its … compressive stress but also on spatial stress or strain gradients, rendering failure initiation size-scale dependent. The present work demonstrates and investigates the aforementioned effect through numerical simulations of periodically layered structures with notches and holes under bending and compressive loads, respectively. The presented results emphasize the importance of the reinforcing layer thickness on the load carrying capacity of the investigated structures, at a constant volumetric fraction of the reinforcement. The observed strengthening at higher values of the relative layer thickness …

  17. Brittle fracture in structural steels: perspectives at different size-scales.

    Science.gov (United States)

    Knott, John

    2015-03-28

    This paper describes characteristics of transgranular cleavage fracture in structural steel, viewed at different size-scales. Initially, consideration is given to structures and the service duty to which they are exposed at the macroscale, highlighting failure by plastic collapse and failure by brittle fracture. This is followed by sections describing the use of fracture mechanics and materials testing in carrying-out assessments of structural integrity. Attention then focuses on the microscale, explaining how values of the local fracture stress in notched bars or of fracture toughness in pre-cracked test-pieces are related to features of the microstructure: carbide thicknesses in wrought material; the sizes of oxide/silicate inclusions in weld metals. Effects of a microstructure that is 'heterogeneous' at the mesoscale are treated briefly, with respect to the extraction of test-pieces from thick sections and to extrapolations of data to low failure probabilities. The values of local fracture stress may be used to infer a local 'work-of-fracture' that is found experimentally to be a few times greater than that of two free surfaces. Reasons for this are discussed in the conclusion section on nano-scale events. It is suggested that, ahead of a sharp crack, it is necessary to increase the compliance by a cooperative movement of atoms (involving extra work) to allow the crack-tip bond to displace sufficiently for the energy of attraction between the atoms to reduce to zero. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Size-selective pulmonary dose indices for metal-working fluid aerosols in machining and grinding operations in the automobile manufacturing industry.

    Science.gov (United States)

    Woskie, S R; Smith, T J; Hallock, M F; Hammond, S K; Rosenthal, F; Eisen, E A; Kriebel, D; Greaves, I A

    1994-01-01

    The current metal-working fluid exposures at three locations that manufacture automotive parts were assessed in conjunction with epidemiological studies of the mortality and respiratory morbidity experiences of workers at these plants. A rationale is presented for selecting and characterizing epidemiologic exposure groups in this environment. More than 475 full-shift personal aerosol samples were taken using a two-stage personal cascade impactor with median size cut-offs of 9.8 microns and 3.5 microns, plus a backup filter. For a sample of 403 workers exposed to aerosols of machining or grinding fluids, the mean total exposure was 706 micrograms/m3 (standard error (SE) = 21 micrograms/m3). Among 72 assemblers unexposed to machining fluids, the mean total exposure was 187 ± 10 (SE) micrograms/m3. An analysis of variance model identified factors significantly associated with exposure level and permitted estimates of exposure for workers in the unsampled machine type/metal-working fluid groups. Comparison of the results obtained from personal impactor samples with predictions from an aerosol-deposition model for the human respiratory tract showed high correlation. However, the amount collected on the impactor stage underestimates extrathoracic deposition and overestimates tracheobronchial and alveolar deposition, as calculated by the deposition model. When both the impactor concentration and the deposition-model concentration were used to estimate cumulative thoracic concentrations for the worklives of a subset of auto workers, there was no significant difference in the rank order of the subjects' cumulative concentration. However, the cumulative impactor concentration values were significantly higher than the cumulative deposition-model concentration values for the subjects.

  19. Relationship Between Ureteral Jet Flow, Visual Analogue Scale, and Ureteral Stone Size.

    Science.gov (United States)

    Ongun, Sakir; Teken, Abdurrazak; Yılmaz, Orkun; Süleyman, Sakir

    2017-06-01

    To contribute to the diagnosis and treatment of ureteral stones by investigating the relationship between the ureteral jet flow measurements of patients with ureteral stones, the size of the stones, and the patients' pain scores. The sample consisted of patients who presented with acute renal colic between December 2014 and December 2015 and who were found on noncontrast computed tomography to have a urinary stone. The ureteral jet flow velocities were determined using Doppler ultrasonography. The patients were all assessed in terms of stone size, localization and area, anteroposterior pelvis (AP) diameter, and visual analogue scale (VAS) scores. A total of 102 patients were included in the study. As the VAS score decreased, the peak jet flow velocity on the stone side increased, whereas the flow velocity on the other side, AP diameter, and stone area were reduced (P < …). As the stone size increased, the peak flow velocity was reduced and the AP diameter increased significantly (P < …). Ureteral jet flow was not observed in 17 patients on the stone side. A statistically significant difference was found between these patients and the remaining patients in terms of all parameters (P < …). When the peak flow velocity of the ureteral jet is low with a severe level of pain, or the peak flow velocity of the ureteral jet cannot be measured, there is a low possibility of spontaneous passage and a high possibility of a large stone, and therefore treatment should be started immediately. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Finite-size scaling method for the Berezinskii–Kosterlitz–Thouless transition

    International Nuclear Information System (INIS)

    Hsieh, Yun-Da; Kao, Ying-Jer; Sandvik, Anders W

    2013-01-01

    We test an improved finite-size scaling method for reliably extracting the critical temperature T_BKT of a Berezinskii–Kosterlitz–Thouless (BKT) transition. Using known single-parameter logarithmic corrections to the spin stiffness ρ_s at T_BKT in combination with the Kosterlitz–Nelson relation between the transition temperature and the stiffness, ρ_s(T_BKT) = 2T_BKT/π, we define a size-dependent transition temperature T_BKT(L_1, L_2) based on a pair of system sizes L_1, L_2, e.g., L_2 = 2L_1. We use Monte Carlo data for the standard two-dimensional classical XY model to demonstrate that this quantity is well behaved and can be reliably extrapolated to the thermodynamic limit using the next expected logarithmic correction beyond the ones included in defining T_BKT(L_1, L_2). For the Monte Carlo calculations we use GPU (graphical processing unit) computing to obtain high-precision data for L up to 512. We find that the sub-leading logarithmic corrections have significant effects on the extrapolation. Our result T_BKT = 0.8935(1) is several error bars above the previously best estimates of the transition temperature, T_BKT ≈ 0.8929. If only the leading log-correction is used, the result is, however, consistent with the lower value, suggesting that previous works have underestimated T_BKT because of the neglect of sub-leading logarithms. Our method is easy to implement in practice and should be applicable to generic BKT transitions. (paper)
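    To make the definition of T_BKT(L_1, L_2) concrete: for a pair of sizes one solves for the temperature and correction scale L_0 at which the finite-size stiffness meets the log-corrected Kosterlitz–Nelson line for both sizes simultaneously. The stiffness function below is a synthetic stand-in constructed so that a solution exists; interpolants of real Monte Carlo data would replace it.

        import numpy as np
        from scipy.optimize import fsolve

        def rho_s(T, L):
            # Synthetic stand-in crossing the corrected KN line at T = 0.89, L0 = 0.7.
            return (2 * 0.89 / np.pi) * (1 + 1 / (2 * np.log(L / 0.7))) + 0.8 * (0.89 - T)

        def equations(z, L1, L2):
            T, lnL0 = z
            f = lambda L: rho_s(T, L) - (2 * T / np.pi) * (1 + 1 / (2 * (np.log(L) - lnL0)))
            return [f(L1), f(L2)]

        for L1 in (64, 128, 256):
            T, lnL0 = fsolve(equations, x0=[0.9, -1.0], args=(L1, 2 * L1))
            print(f"L1 = {L1:4d}:  T_BKT(L1, 2L1) = {T:.4f}")
        # The resulting sequence is what the paper extrapolates, including the
        # next sub-leading logarithmic correction, to the thermodynamic limit.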

  1. Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild

    Science.gov (United States)

    Broell, Franziska; Taggart, Christopher T.

    2015-01-01

    This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags, based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming ‘efficiently’, is independent of size, confirming that stroke frequency scales as length^-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^-1 (r^2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild. PMID:26673777
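    The claimed TBF ~ length^-1 relation corresponds to a slope near -1 in log-log coordinates, which is a two-line fit. The lengths and frequencies below are invented stand-ins, not the tagged saithe and sturgeon data.

        import numpy as np

        length = np.array([0.45, 0.60, 0.75, 0.90, 1.10, 1.40])   # fork length, m
        tbf    = np.array([2.9, 2.2, 1.8, 1.45, 1.2, 0.95])       # dominant TBF, Hz

        slope, intercept = np.polyfit(np.log(length), np.log(tbf), 1)
        print(f"TBF ~ L^{slope:.2f}")   # near -1 implies size-independent speed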

  2. Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild.

    Directory of Open Access Journals (Sweden)

    Franziska Broell

    Full Text Available This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags, based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming 'efficiently', is independent of size, confirming that stroke frequency scales as length^-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^-1 (r^2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild.

  3. Scaling law and enhancement of lift generation of an insect-size hovering flexible wing

    Science.gov (United States)

    Kang, Chang-kwon; Shyy, Wei

    2013-01-01

    We report a comprehensive scaling law and novel lift generation mechanisms relevant to the aerodynamic functions of structural flexibility in insect flight. Using a Navier–Stokes equation solver, fully coupled to a structural dynamics solver, we consider the hovering motion of a wing of insect size, in which the dynamics of fluid–structure interaction leads to passive wing rotation. Lift generated on the flexible wing scales with the relative shape deformation parameter, whereas the optimal lift is obtained when the wing deformation synchronizes with the imposed translation, consistent with previously reported observations for fruit flies and honeybees. Systematic comparisons with rigid wings illustrate that the nonlinear response in wing motion results in a greater peak angle compared with a simple harmonic motion, yielding higher lift. Moreover, the compliant wing streamlines its shape via camber deformation to mitigate the nonlinear lift-degrading wing–wake interaction to further enhance lift. These bioinspired aeroelastic mechanisms can be used in the development of flapping wing micro-robots. PMID:23760300

  4. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We

  5. Effects of Isometric Brain-Body Size Scaling on the Complexity of Monoaminergic Neurons in a Minute Parasitic Wasp

    NARCIS (Netherlands)

    Woude, van der Emma; Smid, Hans M.

    2017-01-01

    Trichogramma evanescens parasitic wasps show large phenotypic plasticity in brain and body size, resulting in a 5-fold difference in brain volume among genetically identical sister wasps. Brain volume scales linearly with body volume in these wasps. This isometric brain scaling forms an exception to

  6. Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws

    Science.gov (United States)

    Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.

    2009-04-01

    Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow, with each bump of the order of 10^2 to 10^3 m in length and 10^1 m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r^2 = 0.48), approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E–W
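    The reported W = 7L^(1/2) relation is recoverable from a width-length cloud by a log-log fit. The synthetic sample below merely stands in for the mapped drumlins, generated around the published means, to show the procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        L = rng.lognormal(mean=np.log(600.0), sigma=0.45, size=5000)   # lengths, m
        W = 7.0 * np.sqrt(L) * rng.lognormal(sigma=0.25, size=5000)    # widths, m

        b, ln_a = np.polyfit(np.log(L), np.log(W), 1)   # fit W = a * L**b
        E = L / W
        print(f"W ~ {np.exp(ln_a):.1f} * L^{b:.2f}; mean elongation E = {E.mean():.2f}")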

  7. Size-scaling behaviour of the electronic polarizability of one-dimensional interacting systems

    Science.gov (United States)

    Chiappe, G.; Louis, E.; Vergés, J. A.

    2018-05-01

    Electronic polarizability of finite chains is accurately calculated from the total energy variation of the system produced by small but finite static electric fields applied along the chain direction. Normalized polarizability, that is, polarizability divided by chain length, diverges as the second power of length for metallic systems but approaches a constant value for insulating systems. This behaviour provides a very convenient way to characterize the wave-function malleability of finite systems, as it avoids the need of attaching infinite contacts to the chain ends. Hubbard model calculations at half filling show that the method works for a small interaction value U = 1 that corresponds to a really small spectral gap of 0.005 (hopping t = -1 is assumed). Once successfully checked, the method has been applied to the long-range hopping model of Gebhard and Ruckenstein showing 1/r hopping decay (Gebhard and Ruckenstein 1992 Phys. Rev. Lett. 68 244; Gebhard et al 1994 Phys. Rev. B 49 10926). Metallicity for U values below the reported metal-insulator transition is obtained, but the surprise comes for U values larger than the critical one (when a gap appears in the spectral density of states), because a steady increase of the normalized polarizability with size is obtained. This critical size-scaling behaviour can be understood as corresponding to a molecule whose polarizability is unbounded. We have checked that a real transfer of charge from one chain end to the opposite occurs as a response to very small electric fields, in spite of the existence of a large gap of the order of U for one-particle excitations. Finally, ab initio quantum chemistry calculations of realistic poly-acetylene chains prove that the occurrence of such critical behaviour in real systems is unlikely.
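    The energy-based definition of the polarizability is straightforward to reproduce: apply a small field and take the second finite difference of the ground-state energy, alpha = -d2E/dF2. The sketch below does this for a non-interacting tight-binding chain, a simplified stand-in for the article's Hubbard chains chosen so that plain diagonalization suffices; at half filling such a chain is gapless, so the normalized polarizability keeps growing with length.

        import numpy as np

        def ground_energy(n, F, t=-1.0):
            """Open chain in a uniform field F: hopping t plus on-site potential -F*x."""
            H = np.zeros((n, n))
            for i in range(n - 1):
                H[i, i + 1] = H[i + 1, i] = t
            H -= np.diag(F * (np.arange(n) - (n - 1) / 2.0))
            eps = np.linalg.eigvalsh(H)
            return 2.0 * eps[: n // 2].sum()    # two spins per occupied level

        def polarizability(n, dF=1e-3):
            E0 = ground_energy(n, 0.0)
            return -(ground_energy(n, dF) + ground_energy(n, -dF) - 2 * E0) / dF ** 2

        for n in (10, 20, 40, 80):
            print(n, polarizability(n) / n)     # normalized polarizability grows with n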

  8. Using the raindrop size distribution to quantify the soil detachment rate at the laboratory scale

    Science.gov (United States)

    Jomaa, S.; Jaffrain, J.; Barry, D. A.; Berne, A.; Sander, G. C.

    2010-05-01

    Rainfall simulators are beneficial tools for studying soil erosion processes and sediment transport for different circumstances and scales. They are useful to better understand soil erosion mechanisms and, therefore, to develop and validate process-based erosion models. Simulators permit experimental replicates for both simple and complex configurations. The 2 m × 6 m EPFL erosion flume is equipped with a hydraulic slope control and a sprinkling system located on oscillating bars 3 m above the surface. It provides a near-uniform spatial rainfall distribution. The intensity of the precipitation can be adjusted by changing the oscillation interval. The flume is filled to a depth of 0.32 m with an agricultural loamy soil. Raindrop detachment is an important process in interrill erosion, the latter varying with the soil properties as well as the raindrop size distribution and drop velocity. Since the soil detachment varies with the kinetic energy of raindrops, an accurate characterization of drop size distribution (DSD, measured, e.g., using a laser disdrometer) can potentially support erosion calculations. Here, a laser disdrometer was used at different rainfall intensities in the EPFL flume to quantify the rainfall event in terms of number of drops, diameter and velocity. At the same time, soil particle motion was measured locally using splash cups. These cups measured the detached material rates into upslope and downslope compartments. In contrast to previously reported splash cup experiments, the cups used in this study were equipped at the top with upside-down funnels, the upper part having the same diameter as the soil sampled at the bottom. This ensured that the soil detached and captured by the device was not re-exposed to rainfall. The experimental data were used to quantify the relationship between the raindrop distribution and the splash-driven sediment transport.

  9. Significant enhancement of magnetoresistance with the reduction of particle size in nanometer scale

    Science.gov (United States)

    Das, Kalipada; Dasgupta, P.; Poddar, A.; Das, I.

    2016-01-01

    The physics of materials with large magnetoresistance (MR), defined as the percentage change of electrical resistance with the application of an external magnetic field, has been an active field of research for quite some time. In addition to the fundamental interest, large MR has widespread applications, including the field of magnetic field sensor technology. New materials with large MR are interesting. However, it is more appealing to the broad scientific community if a method is described that achieves a many-fold enhancement of the MR of already known materials. Our study on several manganite samples [La1−xCaxMnO3 (x = 0.52, 0.54, 0.55)] illustrates a method of significant enhancement of MR through the reduction of the particle size on the nanometer scale. Our experimentally observed results are explained by considering a model consisting of a charge-ordered antiferromagnetic core and a shell having short-range ferromagnetic correlation between the uncompensated surface spins in the nanoscale regime. The ferromagnetic fractions obtained theoretically for the nanoparticles are shown to be in good agreement with the experimental results. This method of improving the magnetoresistive property by several orders of magnitude has enormous potential for magnetic field sensor technology. PMID:26837285

  10. Determining the Particle Size of Debris from a Tunnel Boring Machine Through Photographic Analysis and Comparison Between Excavation Performance and Rock Mass Properties

    Science.gov (United States)

    Rispoli, A.; Ferrero, A. M.; Cardu, M.; Farinetti, A.

    2017-10-01

    This paper presents the results of a study carried out on a 6.3-m-diameter exploratory tunnel excavated in hard rock by an open tunnel boring machine (TBM). The study provides a methodology, based on photographic analysis, for the evaluation of the particle size distribution of debris produced by the TBM. A number of tests were carried out on the debris collected during the TBM advancement. In order to produce a parameter indicative of the particle size of the debris, the coarseness index (CI) was defined and compared with some parameters representative of the TBM performance [i.e. the excavation specific energy (SE) and field penetration index (FPI)] and rock mass features, such as RMR, GSI, uniaxial compression strength and joint spacing. The results obtained showed a clear trend between the CI and some TBM performance parameters, such as SE and FPI. On the contrary, due to the rock mass fracturing, a clear relationship between the CI and rock mass characteristics was not found.

  11. Design of water-repellant coating using dual scale size of hybrid silica nanoparticles on polymer surface

    Science.gov (United States)

    Conti, J.; De Coninck, J.; Ghazzal, M. N.

    2018-04-01

    The dual-scale size of the silica nanoparticles is commonly aimed at producing dual-scale roughness, also called hierarchical roughness (the Lotus effect). In this study, we describe a method to build a stable water-repellant coating with controlled roughness. Hybrid silica nanoparticles are self-assembled over a polymeric surface by alternating consecutive layers, each one using homogeneously distributed silica nanoparticles of a particular size. The effect of the nanoparticle size of the first layer on the final roughness of the coating is studied. The first layer makes it possible to adjust the distance between the silica nanoparticles of the upper layer, leading to a tuneable and controlled final roughness. An optimal nanoparticle size has been found for higher water-repellency. Furthermore, the stability of the coating on the polymeric surface (polycarbonate substrate) is ensured by photopolymerization of the hybridized silica nanoparticles using vinyl functional groups.

  12. Nano Mechanical Machining Using AFM Probe

    Science.gov (United States)

    Mostofa, Md. Golam

    Complex miniaturized components with high form accuracy will play key roles in the future development of many products, as they provide portability, disposability, lower material consumption in production, low power consumption during operation, lower sample requirements for testing, and higher heat transfer due to their very high surface-to-volume ratio. Given the high market demand for such micro and nano featured components, different manufacturing methods have been developed for their fabrication. Some of the common technologies in micro/nano fabrication are photolithography, electron beam lithography, X-ray lithography and other semiconductor processing techniques. Although these methods are capable of fabricating micro/nano structures with a resolution of less than a few nanometers, some of the shortcomings associated with them, such as high production costs for customized products and limited material choices, necessitate the development of other fabrication techniques. Micro/nano mechanical machining, such as atomic force microscope (AFM) probe based nano fabrication, has therefore been used to overcome some of the major restrictions of the traditional processes. This technique removes material from the workpiece by engaging a micro/nano size cutting tool (i.e. the AFM probe) and is applicable to a wider range of materials compared to the photolithographic process. In spite of the unique benefits of nano mechanical machining, there are also some challenges with this technique as the scale is reduced, such as size effects, burr formation, chip adhesion, fragility of tools and tool wear. Moreover, AFM based machining does not have any rotational movement, which makes fabrication of 3D features more difficult. Thus, vibration-assisted machining is introduced into AFM probe based nano mechanical machining to overcome the limitations associated with the conventional AFM probe based scratching method. Vibration-assisted machining reduced the cutting forces

  13. Population size estimation of men who have sex with men through the network scale-up method in Japan.

    Directory of Open Access Journals (Sweden)

    Satoshi Ezoe

    Full Text Available BACKGROUND: Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. METHODS AND FINDINGS: An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. CONCLUSIONS: The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
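    The scale-up arithmetic itself is compact: average personal network size is estimated from the known subpopulations, and the hidden population follows from the reported hidden-contact counts. All numbers below are toy placeholders, not the survey's values.

        import numpy as np

        N = 100_000_000                                       # hypothetical reference population
        known_sizes = np.array([160_000, 290_000, 250_000])   # hypothetical group sizes
        known_counts = np.array([[2, 1, 0],                   # acquaintances reported per
                                 [0, 3, 1],                   # respondent in each known group
                                 [1, 0, 2],
                                 [4, 2, 1]])
        msm_counts = np.array([1, 0, 2, 1])                   # reported MSM acquaintances

        # Personal network size per respondent, scaled from the known groups:
        c = known_counts.sum(axis=1) / known_sizes.sum() * N

        # Network scale-up estimate of the hidden population:
        msm_size = msm_counts.sum() / c.sum() * N
        print(f"mean network size {c.mean():.0f}, estimated MSM population {msm_size:,.0f}")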

  14. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    Science.gov (United States)

    Barrios, M. I.

    2013-12-01

    The hydrological sciences require the emergence of a consistent theoretical corpus describing the relationships between the dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue for advancing this science. This work is focused on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues
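    At the point scale, the Green-Ampt model used above reduces to one implicit equation for cumulative infiltration, solvable with any root finder. The soil parameters below are loam-like illustrative values, not those of the virtual experiment.

        import numpy as np
        from scipy.optimize import brentq

        def green_ampt_F(t, K, psi, dtheta):
            """Cumulative infiltration F(t) from F - S*ln(1 + F/S) = K*t, with
            S = psi*dtheta (ponded conditions assumed throughout)."""
            S = psi * dtheta
            return brentq(lambda F: F - S * np.log(1.0 + F / S) - K * t, 1e-9, 1e3)

        K, psi, dtheta = 0.65, 11.0, 0.35   # cm/h, cm, dimensionless (illustrative)
        for t in (0.25, 0.5, 1.0, 2.0):
            F = green_ampt_F(t, K, psi, dtheta)
            f = K * (1.0 + psi * dtheta / F)    # infiltration rate f = K*(1 + S/F)
            print(f"t = {t:4.2f} h   F = {F:5.2f} cm   f = {f:5.2f} cm/h")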

  15. A statistical methodology to derive the scaling law for the H-mode power threshold using a large multi-machine database

    International Nuclear Information System (INIS)

    Murari, A.; Lupelli, I.; Gaudio, P.; Gelfusa, M.; Vega, J.

    2012-01-01

    In this paper, a refined set of statistical techniques is developed and then applied to the problem of deriving the scaling law for the threshold power to access the H-mode of confinement in tokamaks. This statistical methodology is applied to the 2010 version of the ITPA International Global Threshold Data Base v6b (IGDBTHv6b). To increase the engineering and operative relevance of the results, only macroscopic physical quantities, measured in the vast majority of experiments, have been considered as candidate variables in the models. Different principled methods, such as agglomerative hierarchical variable clustering (which makes no assumption about the functional form of the scaling) and nonlinear regression, are implemented to select the best subset of candidate independent variables and to improve the regression model accuracy. Two independent model selection criteria, based on the classical (Akaike information criterion) and Bayesian (Bayesian information criterion) formalisms, are then used to identify the most efficient scaling law from the candidate models. The results derived from the full multi-machine database confirm the results of previous analyses but emphasize the importance of shaping quantities, elongation and triangularity. On the other hand, the scaling laws for the different machines and at different currents differ from each other at a confidence level well above 95%, suggesting caution in the use of global scaling laws for both interpretation and extrapolation purposes. (paper)
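    The model-selection step described here can be illustrated with a toy computation: fit candidate power-law scalings by least squares in log space and rank them with AIC and BIC. The sketch below uses synthetic data and invented variable names; it is not the ITPA database analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic "engineering" variables and a noisy power-law threshold
B, ne, S = rng.uniform(1, 5, n), rng.uniform(1, 10, n), rng.uniform(10, 50, n)
P = 0.05 * B**0.8 * ne**0.7 * S**0.9 * rng.lognormal(0.0, 0.1, n)

def fit_and_score(cols, y):
    """Least-squares power-law fit in log space; return (AIC, BIC)."""
    X = np.column_stack([np.ones(len(y))] + [np.log(c) for c in cols])
    beta, _, _, _ = np.linalg.lstsq(X, np.log(y), rcond=None)
    rss = float(np.sum((np.log(y) - X @ beta) ** 2))
    k, m = X.shape[1], len(y)
    aic = m * np.log(rss / m) + 2 * k
    bic = m * np.log(rss / m) + k * np.log(m)
    return aic, bic

for name, cols in {"B, ne": (B, ne), "B, ne, S": (B, ne, S)}.items():
    aic, bic = fit_and_score(cols, P)
    print(f"model [{name}]  AIC = {aic:.1f}  BIC = {bic:.1f}")
```

    Both criteria should favor the three-variable model here, since the third variable genuinely enters the synthetic law; on real data the two criteria can disagree, which is why the paper uses both.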

  16. Investigation the gas film in micro scale induced error on the performance of the aerostatic spindle in ultra-precision machining

    Science.gov (United States)

    Chen, Dongju; Huo, Chen; Cui, Xianxian; Pan, Ri; Fan, Jinwei; An, Chenhui

    2018-05-01

    The objective of this work is to study the influence of the error induced by the gas film at the micro scale on the static and dynamic behavior of a shaft supported by aerostatic bearings. The static and dynamic balance models of the aerostatic bearing are built from the stiffness and damping calculated at the micro scale. The static simulation shows that the deformation of the aerostatic spindle system is decreased at the micro scale. For the dynamic behavior, both the stiffness and the damping in the axial and radial directions are increased at the micro scale. Experiments on the stiffness and rotation error of the spindle show that the deflection of the shaft computed from the micro-scale parameters is very close to the measured deviation of the spindle system. The frequency content of the transient analysis is similar to that of the actual test, and both are higher than the results of the traditional model that does not consider the micro-scale factor. It can therefore be concluded that the values obtained when considering the micro-scale factor are closer to the actual working conditions of the aerostatic spindle system. These results provide a theoretical basis for the design and machining process of machine tools.

  17. Electric machines

    CERN Document Server

    Gross, Charles A

    2006-01-01

    BASIC ELECTROMAGNETIC CONCEPTS: Basic Magnetic Concepts; Magnetically Linear Systems: Magnetic Circuits; Voltage, Current, and Magnetic Field Interactions; Magnetic Properties of Materials; Nonlinear Magnetic Circuit Analysis; Permanent Magnets; Superconducting Magnets; The Fundamental Translational EM Machine; The Fundamental Rotational EM Machine; Multiwinding EM Systems; Leakage Flux; The Concept of Ratings in EM Systems; Summary; Problems. TRANSFORMERS: The Ideal n-Winding Transformer; Transformer Ratings and Per-Unit Scaling; The Nonideal Three-Winding Transformer; The Nonideal Two-Winding Transformer; Transformer Efficiency and Voltage Regulation; Practical Considerations; The Autotransformer; Operation of Transformers in Three-Phase Environments; Sequence Circuit Models for Three-Phase Transformer Analysis; Harmonics in Transformers; Summary; Problems. BASIC MECHANICAL CONSIDERATIONS: Some General Perspectives; Efficiency; Load Torque-Speed Characteristics; Mass Polar Moment of Inertia; Gearing; Operating Modes; Translational Systems; A Comprehensive Example: The Elevator; P...

  18. Multi-objective component sizing of a power-split plug-in hybrid electric vehicle powertrain using Pareto-based natural optimization machines

    Science.gov (United States)

    Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.

    2016-03-01

    The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitist non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.

  19. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid and Building-to-Grid Interface

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Emma M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hendrix, Val [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Deka, Deepjyoti [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-16

    This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data and make predictions and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors – such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals – such as total carbon reduction or other economic benefit to customers. While some basic analysis into these data streams can provide a wealth of information, computational and human boundaries on performing the analysis

  20. Determining the size of a complete disturbance landscape: multi-scale, continental analysis of forest change

    Science.gov (United States)

    Brian Buma; Jennifer K Costanza; Kurt Riitters

    2017-01-01

    The scale of investigation for disturbance-influenced processes plays a critical role in theoretical assumptions about stability, variance, and equilibrium, as well as conservation reserve and long-term monitoring program design. Critical consideration of scale is required for robust planning designs, especially when anticipating future disturbances whose exact...

  1. The Effects of Transient Emotional State and Workload on Size Scaling in Perspective Displays

    Energy Technology Data Exchange (ETDEWEB)

    Tuan Q. Tran; Kimberly R. Raddatz

    2006-10-01

    Previous research has been devoted to the study of perceptual (e.g., number of depth cues) and cognitive (e.g., instructional set) factors that influence veridical size perception in perspective displays. However, considering that perspective displays have utility in high workload environments that often induce high arousal (e.g., aircraft cockpits), the present study sought to examine the effect of observers’ emotional state on the ability to perceive and judge veridical size. Within a dual-task paradigm, observers’ ability to make accurate size judgments was examined under conditions of induced emotional state (positive, negative, neutral) and high and low workload. Results showed that participants in both positive and negative induced emotional states were slower to make accurate size judgments than those not under induced emotional arousal. Results suggest that emotional state is an important factor that influences visual performance on perspective displays and is worthy of further study.

  2. Influence of scale-dependent fracture intensity on block size distribution and rock slope failure mechanisms in a DFN framework

    Science.gov (United States)

    Agliardi, Federico; Galletti, Laura; Riva, Federico; Zanchi, Andrea; Crosta, Giovanni B.

    2017-04-01

    An accurate characterization of the geometry and intensity of discontinuities in a rock mass is key to assessing block size distribution and degree of freedom. These are the main controls on the magnitude and mechanisms of rock slope instabilities (structurally-controlled, step-path or mass failures) and rock mass strength and deformability. Nevertheless, the use of over-simplified discontinuity characterization approaches, unable to capture the stochastic nature of discontinuity features, often hampers a correct identification of dominant rock mass behaviour. Discrete Fracture Network (DFN) modelling tools have provided new opportunities to overcome these caveats. Nevertheless, their ability to provide a representative picture of reality strongly depends on the quality and scale of field data collection. Here we used DFN modelling with FracmanTM to investigate the influence of fracture intensity, characterized on different scales and with different techniques, on the geometry and size distribution of generated blocks, from a rock slope stability perspective. We focused on a test site near Lecco (Southern Alps, Italy), where 600 m high cliffs in thickly-bedded limestones folded at the slope scale impend over Lake Como. We characterized the 3D slope geometry by Structure-from-Motion photogrammetry (range: 150-1500 m; point cloud density > 50 pts/m2). Since the nature and attributes of discontinuities are controlled by brittle failure processes associated with large-scale folding, we performed a field characterization of meso-structural features (faults and related kinematics, vein and joint associations) in different fold domains. We characterized the discontinuity populations identified by structural geology on different spatial scales ranging from outcrops (field surveys and photo-mapping) to large slope sectors (point cloud and photo-mapping). For each sampling domain, we characterized discontinuity orientation statistics and performed fracture mapping and circular

  3. Size Scales for Thermal Inhomogeneities in Mars' Atmosphere Surface Layer: Mars Pathfinder

    Science.gov (United States)

    Mihalov, John D.; Haberle, Robert M.; Seiff, Alvin; Murphy, James R.; Schofield, John T.; DeVincenzi, Donald L. (Technical Monitor)

    2000-01-01

    Atmospheric temperature measurements at three heights with thin-wire thermocouples on the 1.1 m Mars Pathfinder meteorology mast allow estimates of the integral scale of the atmospheric thermal turbulence during an 83-sol period that begins in the summer. The integral scale is a measure for regions of perturbations in turbulent media that roughly characterizes locations where the perturbations are correlated. Excluding some time intervals with violent excursions of the mean temperatures, integral scale values are found that increase relatively rapidly from a few tenths of a meter or less near dawn to several meters by mid-morning. During mid-morning, the diurnal and shorter-time-scale wind direction variations often place the meteorology mast in the thermal wake of the Lander.

  4. Size exclusion chromatography for semipreparative scale separation of Au38(SR)24 and Au40(SR)24 and larger clusters.

    Science.gov (United States)

    Knoppe, Stefan; Boudon, Julien; Dolamic, Igor; Dass, Amala; Bürgi, Thomas

    2011-07-01

    Size exclusion chromatography (SEC) on a semipreparative scale (10 mg and more) was used to size-select ultrasmall gold nanoclusters (<2 nm) from polydisperse mixtures. In particular, the ubiquitous byproducts of the etching process toward Au(38)(SR)(24) (SR, thiolate) clusters were separated and obtained with high monodispersity (based on mass spectrometry). The isolated fractions were characterized by UV-vis spectroscopy, MALDI mass spectrometry, HPLC, and electron microscopy. Most notably, the separation of Au(38)(SR)(24) and Au(40)(SR)(24) clusters is demonstrated.

  5. Insulin/IGF-regulated size scaling of neuroendocrine cells expressing the bHLH transcription factor Dimmed in Drosophila.

    Directory of Open Access Journals (Sweden)

    Jiangnan Luo

    Full Text Available Neurons and other cells display a large variation in size in an organism. Thus, a fundamental question is how growth of individual cells and their organelles is regulated. Is size scaling of individual neurons regulated post-mitotically, independent of growth of the entire CNS? Although the role of insulin/IGF-signaling (IIS) in growth of tissues and whole organisms is well established, it is not known whether it regulates the size of individual neurons. We therefore studied the role of IIS in the size scaling of neurons in the Drosophila CNS. By targeted genetic manipulations of insulin receptor (dInR) expression in a variety of neuron types we demonstrate that the cell size is affected only in neuroendocrine cells specified by the bHLH transcription factor DIMMED (DIMM). Several populations of DIMM-positive neurons tested displayed enlarged cell bodies after overexpression of the dInR, as well as PI3 kinase and Akt1 (protein kinase B), whereas DIMM-negative neurons did not respond to dInR manipulations. Knockdown of these components produces the opposite phenotype. Increased growth can also be induced by targeted overexpression of nutrient-dependent TOR (target of rapamycin) signaling components, such as Rheb (small GTPase), TOR and S6K (S6 kinase). After Dimm knockdown in neuroendocrine cells, manipulations of dInR expression have significantly smaller effects on cell size. We also show that dInR expression in neuroendocrine cells can be altered by up- or down-regulation of Dimm. This novel dInR-regulated size scaling is seen during postembryonic development, continues in the aging adult and is diet dependent. The increase in cell size includes cell body, axon terminations, nucleus and Golgi apparatus. We suggest that the dInR-mediated scaling of neuroendocrine cells is part of a plasticity that adapts the secretory capacity to changing physiological conditions and nutrient-dependent organismal growth.

  6. Size scaling effects on the particle density fluctuations in confined plasmas

    International Nuclear Information System (INIS)

    Vazquez, Federico; Markus, Ferenc

    2009-01-01

    In this paper, memory and nonlocal effects on fluctuating mass diffusion are addressed in the context of fusion plasmas. Nonlocal effects are included by considering a diffusivity coefficient depending on the size of the container in the transverse direction to the applied magnetic field. It is obtained by resorting to the general formulation of the extended version of irreversible thermodynamics in terms of the higher order dissipative fluxes. The developed model describes two different types of the particle density time correlation function. Both have been observed in tokamak and nontokamak devices. These two kinds of time correlation function characterize the wave and the diffusive transport mechanisms of particle density perturbations. A transition between them is found, which is controlled by the size of the container. A phase diagram in the (L,2π/k) space describes the relation between the dynamics of particle density fluctuations and the size L of the system together with the oscillating mode k of the correlation function.

  7. Size Scaling and Bursting Activity in Thermally Activated Breakdown of Fiber Bundles

    KAUST Repository

    Yoshioka, Naoki

    2008-10-03

    We study subcritical fracture driven by thermally activated damage accumulation in the framework of fiber bundle models. We show that in the presence of stress inhomogeneities, thermally activated cracking results in an anomalous size effect; i.e., the average lifetime t_f decreases as a power law of the system size, t_f ∼ L^(-z), where the exponent z depends on the external load σ and on the temperature T in the form z ∼ f(σ/T^(3/2)). We propose a modified form of the Arrhenius law which provides a comprehensive description of thermally activated breakdown. Thermal fluctuations trigger bursts of breakings which have a power law size distribution. © 2008 The American Physical Society.
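    Because the quoted size effect is a pure power law, the exponent z can be estimated as the negated slope of a log-log fit of lifetime against system size. A minimal sketch with synthetic lifetimes standing in for simulation output:

```python
import numpy as np

L = np.array([32, 64, 128, 256, 512])
# Synthetic lifetimes following t_f ~ L^(-z) with z = 0.6 plus small scatter
tf = 1e4 * L**-0.6 * np.random.default_rng(1).lognormal(0.0, 0.05, L.size)

z = -np.polyfit(np.log(L), np.log(tf), 1)[0]  # slope of log tf vs log L is -z
print(f"estimated exponent z = {z:.2f}")
```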

  8. Size effects and strain localization in atomic-scale cleavage modeling

    International Nuclear Information System (INIS)

    Elsner, B A M; Müller, S

    2015-01-01

    In this work, we study the adhesion and decohesion of Cu(1 0 0) surfaces using density functional theory (DFT) calculations. An upper stress to surface decohesion is obtained via the universal binding energy relation (UBER), but the model is limited to rigid separation of bulk-terminated surfaces. When structural relaxations are included, an unphysical size effect arises if decohesion is considered to occur as soon as the strain energy equals the energy of the newly formed surfaces. We employ the nudged elastic band (NEB) method to show that this size effect is opposed by a size-dependency of the energy barriers involved in the transition. Further, we find that the transition occurs via a localization of bond strain in the vicinity of the cleavage plane, which resembles the strain localization at the tip of a sharp crack that is predicted by linear elastic fracture mechanics. (paper)

  9. The relationship between 19th century BMIs and family size: Economies of scale and positive externalities.

    Science.gov (United States)

    Carson, Scott Alan

    2015-04-01

    The use of body mass index values (BMI) to measure living standards is now a well-accepted method in economics. Nevertheless, a neglected area in historical studies is the relationship between 19th century BMI and family size, and this relationship is documented here to be positive. Material inequality and BMI are the subject of considerable debate, and there was a positive relationship between BMI and wealth and an inverse relationship with inequality. After controlling for family size and wealth, BMI values were related to occupations, and farmers and laborers had greater BMI values than workers in other occupations. Copyright © 2014 Elsevier GmbH. All rights reserved.

  10. Scaling of the Urban Water Footprint: An Analysis of 65 Mid- to Large-Sized U.S. Metropolitan Areas

    Science.gov (United States)

    Mahjabin, T.; Garcia, S.; Grady, C.; Mejia, A.

    2017-12-01

    Scaling laws have been shown to be relevant to a range of disciplines including biology, ecology, hydrology, and physics, among others. Recently, scaling was shown to be important for understanding and characterizing cities. For instance, it was found that urban infrastructure (water supply pipes and electrical wires) tends to scale sublinearly with city population, implying that large cities are more efficient. In this study, we explore the scaling of the water footprint of cities. The water footprint is a measure of water appropriation that considers both the direct and indirect (virtual) water use of a consumer or producer. Here we compute the water footprint of 65 mid- to large-sized U.S. metropolitan areas, accounting for direct and indirect water uses associated with agricultural and industrial commodities, and residential and commercial water uses. We find that the urban water footprint, computed as the sum of the water footprint of consumption and production, exhibits sublinear scaling with an exponent of 0.89. This suggests the possibility of large cities being more water-efficient than small ones. To further assess this result, we conduct additional analysis by accounting for international flows, and the effects of green water and city boundary definition on the scaling. The analysis confirms the scaling and provides additional insight about its interpretation.
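    In the urban-scaling literature the relation tested here is conventionally written as a power law of population N; in that standard notation, with the exponent value taken from the abstract:

```latex
\[
W = W_0 \, N^{\beta}, \qquad
\log W = \log W_0 + \beta \log N, \qquad
\beta \approx 0.89 < 1 .
\]
```

    Sublinearity means the per-capita footprint scales as N^(β-1), i.e., it declines with city size, which is the efficiency interpretation given above.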

  11. The art of being small : brain-body size scaling in minute parasitic wasps

    NARCIS (Netherlands)

    Woude, van der Emma

    2017-01-01

    Haller’s rule states that small animals have relatively larger brains than large animals. This brain-body size relationship may enable small animals to maintain similar levels of brain performance as large animals. However, it also causes small animals to spend an exceptionally large proportion

  12. Size Scaling and Bursting Activity in Thermally Activated Breakdown of Fiber Bundles

    KAUST Repository

    Yoshioka, Naoki; Kun, Ferenc; Ito, Nobuyasu

    2008-01-01

    ...i.e., the average lifetime t_f decreases as a power law of the system size, t_f ∼ L^(-z), where the exponent z depends on the external load σ and on the temperature T in the form z ∼ f(σ/T^(3/2)). We propose a modified form of the Arrhenius law which provides a comprehensive...

  13. Turnover of intra- and extra-aggregate organic matter at the silt-size scale

    Science.gov (United States)

    I. Virto; C. Moni; C. Swanston; C. Chenu

    2010-01-01

    Temperate silty soils are especially sensitive to organic matter losses associated with some agricultural management systems. Long-term preservation of organic C in these soils has been demonstrated to occur mainly in the silt- and clay-size fractions, although the mechanisms through which this happens remain unclear. Although organic matter in such...

  14. Optimal Size for Utilities? Returns to Scale in Water: Evidence from Benchmarking

    OpenAIRE

    Nicola Tynan; Bill Kingdom

    2005-01-01

    Using data from 270 water and sanitation providers, this Note investigates the relationship between a utility's size and its operating costs. The current trend toward transferring responsibility for providing services to the municipal level is driven in part by the assumption that this will make providers more responsive to customers' needs. But findings reported here suggest that smaller ...

  15. Patch size has no effect on insect visitation rate per unit area in garden-scale flower patches

    Science.gov (United States)

    Garbuzov, Mihail; Madsen, Andy; Ratnieks, Francis L. W.

    2015-01-01

    Previous studies investigating the effect of flower patch size on insect flower visitation rate have compared relatively large patches (10-1000s m2) and have generally found a negative relationship per unit area or per flower. Here, we investigate the effects of patch size on insect visitation in patches of smaller area (range c. 0.1-3.1 m2), which are of particular relevance to ornamental flower beds in parks and gardens. We studied two common garden plant species in full bloom with 6 patch sizes each: borage (Borago officinalis) and lavender (Lavandula × intermedia 'Grosso'). We quantified flower visitation by insects by making repeated counts of the insects foraging at each patch. On borage, all insects were honey bees (Apis mellifera, n = 5506 counts). On lavender, insects (n = 737 counts) were bumble bees (Bombus spp., 76.9%), flies (Diptera, 22.4%), and butterflies (Lepidoptera, 0.7%). On both plant species we found positive linear effects of patch size on insect numbers. However, there was no effect of patch size on the number of insects per unit area or per flower and, on lavender, for all insects combined or only bumble bees. The results show that it is possible to make unbiased comparisons of the attractiveness of plant species or varieties to flower-visiting insects using patches of different size within the small scale range studied and make possible projects aimed at comparing ornamental plant varieties using existing garden flower patches of variable area.

  16. Size Scaling in Western North Atlantic Loggerhead Turtles Permits Extrapolation between Regions, but Not Life Stages.

    Science.gov (United States)

    Marn, Nina; Klanjscek, Tin; Stokes, Lesley; Jusup, Marko

    2015-01-01

    Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i) two different regional subsets and (ii) three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications. Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear) model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal.

  17. Size Scaling in Western North Atlantic Loggerhead Turtles Permits Extrapolation between Regions, but Not Life Stages.

    Directory of Open Access Journals (Sweden)

    Nina Marn

    Full Text Available Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i) two different regional subsets and (ii) three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications. Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear) model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal.

  18. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems, with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming at simultaneously considering the cost and risk of a business investment, the average and deviation of life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  19. Analytical realization of finite-size scaling for Anderson localization. Does the band of critical states exist for d > 2?

    International Nuclear Information System (INIS)

    Suslov, I. M.

    2006-01-01

    An analytical realization is suggested for the finite-size scaling algorithm based on the consideration of auxiliary quasi-1D systems. Comparison of the obtained analytical results with the results of numerical calculations indicates that the Anderson transition point splits into the band of critical states. This conclusion is supported by direct numerical evidence (Edwards, Thouless, 1972; Last, Thouless, 1974; Schreiber, 1985). The possibility of restoring the conventional picture still exists but requires a radical reinterpretation of the raw numerical data

  20. Machine Shop Grinding Machines.

    Science.gov (United States)

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  1. Extrapolating population size from the occupancy-abundance relationship and the scaling pattern of occupancy

    DEFF Research Database (Denmark)

    Hui, Cang; McGeoch, Melodie A.; Reyers, Belinda

    2009-01-01

    ...estimated as occurring in South Africa, Lesotho, and Swaziland. SPO models outperformed the OAR models, because OAR models assume environmental homogeneity and yield scale-dependent estimates. Therefore, OAR models should only be applied across small, homogeneous areas. By contrast, SPO models...

  2. A simulation study provided sample size guidance for differential item functioning (DIF) studies using short scales

    DEFF Research Database (Denmark)

    Scott, Neil W.; Fayers, Peter M.; Bottomley, Andrew

    2009-01-01

    Differential item functioning (DIF) analyses are increasingly used to evaluate health-related quality of life (HRQoL) instruments, which often include relatively short subscales. Computer simulations were used to explore how various factors including scale length affect analysis of DIF by ordinal...... logistic regression....

  3. Prediction of spatially variable unsaturated hydraulic conductivity using scaled particle-size distribution functions

    NARCIS (Netherlands)

    Nasta, P.; Romano, N.; Assouline, S; Vrugt, J.A.; Hopmans, J.W.

    2013-01-01

    Simultaneous scaling of soil water retention and hydraulic conductivity functions provides an effective means to characterize the heterogeneity and spatial variability of soil hydraulic properties in a given study area. The statistical significance of this approach largely depends on the number of

  4. A comparison of machine learning algorithms for chemical toxicity classification using a simulated multi-scale data model

    Directory of Open Access Journals (Sweden)

    Li Zhen

    2008-05-01

    Full Text Available Abstract Background Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts are aimed at discovering patterns or classifiers in high-dimensional bioactivity space that predict tissue, organ or whole animal toxicological endpoints. Supervised machine learning is a powerful approach to discover combinatorial relationships in complex in vitro/in vivo datasets. We present a novel model to simulate complex chemical-toxicology data sets and use this model to evaluate the relative performance of different machine learning (ML) methods. Results The classification performance of Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Naïve Bayes (NB), Recursive Partitioning and Regression Trees (RPART), and Support Vector Machines (SVM) in the presence and absence of filter-based feature selection was analyzed using K-way cross-validation testing and independent validation on simulated in vitro assay data sets with varying levels of model complexity, number of irrelevant features and measurement noise. While the prediction accuracy of all ML methods decreased as non-causal (irrelevant) features were added, some ML methods performed better than others. In the limit of using a large number of features, ANN and SVM were always in the top performing set of methods while RPART and KNN (k = 5) were always in the poorest performing set. The addition of measurement noise and irrelevant features decreased the classification accuracy of all ML methods, with LDA suffering the greatest performance degradation. LDA performance is especially sensitive to the use of feature selection. Filter-based feature selection generally improved performance, most strikingly for LDA. Conclusion We have developed a novel simulation model to evaluate machine learning methods for the
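    A compact way to mirror the spirit of this comparison is a cross-validated sweep over several scikit-learn classifiers on synthetic data with irrelevant features and label noise. This is an illustrative sketch, not the paper's simulation model; RPART is stood in for by a CART decision tree.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic assay data: 20 informative features among 200, 10% label noise
X, y = make_classification(n_samples=500, n_features=200, n_informative=20,
                           flip_y=0.1, random_state=0)

models = {
    "ANN": MLPClassifier(max_iter=2000, random_state=0),
    "KNN(k=5)": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
    "NB": GaussianNB(),
    "CART": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)  # K-way cross-validation
    print(f"{name:>8}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```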

  5. Size-Tuned Plastic Flow Localization in Irradiated Materials at the Submicron Scale

    Science.gov (United States)

    Cui, Yinan; Po, Giacomo; Ghoniem, Nasr

    2018-05-01

    Three-dimensional discrete dislocation dynamics (3D-DDD) simulations reveal that, with reduction of sample size in the submicron regime, the mechanism of plastic flow localization in irradiated materials transitions from irradiation-controlled to intrinsic dislocation-source-controlled. Furthermore, the spatial correlation of plastic deformation decreases due to weaker dislocation interactions and less frequent cross slip as the system size decreases, thus manifesting itself in thinner dislocation channels. A simple model of discrete dislocation source activation coupled with cross-slip channel widening is developed to reproduce and physically explain this transition. In order to quantify the phenomenon of plastic flow localization, we introduce a "deformation localization index," with implications for the design of radiation-resistant materials.

  6. Finite-size-scaling analysis of subsystem data in the dilute Ising model

    International Nuclear Information System (INIS)

    Hennecke, M.

    1993-01-01

    Monte Carlo simulation results for the magnetization of subsystems of finite lattices are used to determine the critical temperature and a critical exponent of the simple-cubic Ising model with quenched site dilution, at a concentration of p=40%. Particular attention is paid to the effect of the finite size of the systems from which the subsystem results are obtained. This finiteness of the lattices involved is shown to be a source of large deviations of the critical temperatures and exponents estimated from subsystem data relative to their values in the thermodynamic limit. By the use of different lattice sizes, the results T_c(40%) = 1.209 ± 0.002 and ν(40%) = 0.78 ± 0.01 could be extrapolated

  7. On the use of Cloud Computing and Machine Learning for Large-Scale SAR Science Data Processing and Quality Assessment Analysis

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Geodetic imaging is revolutionizing geophysics, but the scope of discovery has been limited by labor-intensive technological implementation of the analyses. The Advanced Rapid Imaging and Analysis (ARIA) project has proven capability to automate SAR data processing and analysis. Existing and upcoming SAR missions such as Sentinel-1A/B and NISAR are also expected to generate massive amounts of SAR data. This has brought to the forefront the need for analytical tools for SAR quality assessment (QA) on the large volumes of SAR data, a critical step before higher-level time series and velocity products can be reliably generated. Initially, an advanced hybrid-cloud computing science data system was leveraged for large-scale processing, and machine learning approaches were augmented for automated analysis of various quality metrics. Machine learning-based user training of features, cross-validation, and prediction models were integrated into our cloud-based science data processing flow to enable large-scale and high-throughput QA analytics for enabling improvements to the production quality of geodetic data products.

  8. Large-scale machine learning of media outlets for understanding public reactions to nation-wide viral infection outbreaks.

    Science.gov (United States)

    Choi, Sungwoon; Lee, Jangho; Kang, Min-Gyu; Min, Hyeyoung; Chang, Yoon-Seok; Yoon, Sungroh

    2017-10-01

    From May to July 2015, there was a nation-wide outbreak of Middle East respiratory syndrome (MERS) in Korea. MERS is caused by MERS-CoV, an enveloped, positive-sense, single-stranded RNA virus belonging to the family Coronaviridae. Despite expert opinions that the danger of MERS might be exaggerated, there was an overreaction by the public according to the Korean mass media, which led to a noticeable reduction in social and economic activities during the outbreak. To explain this phenomenon, we presumed that machine learning-based analysis of media outlets would be helpful and collected a number of Korean mass media articles and short-text comments produced during the 10-week outbreak. To process and analyze the collected data (over 86 million words in total) effectively, we created a methodology composed of machine-learning and information-theoretic approaches. Our proposal included techniques for extracting emotions from emoticons and Internet slang, which allowed us to significantly (approximately 73%) increase the number of emotion-bearing texts needed for robust sentiment analysis of social media. As a result, we discovered a plausible explanation for the public overreaction to MERS in terms of the interplay between the disease, mass media, and public emotions. Copyright © 2017 Elsevier Inc. All rights reserved.
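    The emoticon/Internet-slang step can be approximated by a lexicon lookup that tags emotion-bearing tokens before conventional sentiment scoring. The toy sketch below invents its own tiny lexicon; the entries and mappings are illustrative, not the paper's.

```python
# Toy emoticon/slang-to-emotion lexicon; entries are invented for illustration.
EMOTICON_LEXICON = {
    "ㅠㅠ": "sadness",   # Korean "crying" slang
    "ㅋㅋ": "joy",       # Korean "laughing" slang
    "ㄷㄷ": "fear",      # Korean "shuddering" slang
    ":)": "joy",
    ":(": "sadness",
}

def extract_emotions(text: str) -> list[str]:
    """Return an emotion tag for every lexicon token found in the text."""
    return [emo for token, emo in EMOTICON_LEXICON.items() if token in text]

print(extract_emotions("메르스 무서워 ㄷㄷ ㅠㅠ"))  # -> ['sadness', 'fear']
```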

  9. Constraint on the post-Newtonian parameter γ on galactic size scales

    International Nuclear Information System (INIS)

    Bolton, Adam S.; Rappaport, Saul; Burles, Scott

    2006-01-01

    We constrain the post-Newtonian gravity parameter γ on kiloparsec scales by comparing the masses of 15 elliptical lensing galaxies from the Sloan Lens ACS Survey as determined in two independent ways. The first method assumes only that Newtonian gravity is correct and is independent of γ, while the second uses gravitational lensing which depends on γ. More specifically, we combine Einstein radii and radial surface-brightness gradient measurements of the lens galaxies with empirical distributions for the mass concentration and velocity anisotropy of elliptical galaxies in the local universe to predict γ-dependent probability distributions for the lens-galaxy velocity dispersions. By comparing with observed velocity dispersions, we derive a maximum-likelihood value of γ=0.98±0.07 (68% confidence). This result is in excellent agreement with the prediction of general relativity that γ=1, which has previously been verified to this accuracy only on solar-system length scales
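    The γ-dependence enters through the standard parametrized post-Newtonian (PPN) light-deflection formula, which the analysis assumes rather than derives; schematically, for a lens of mass M, impact parameter b, Einstein radius θ_E, and the usual lens/source distances:

```latex
\[
\hat{\alpha} \;=\; \frac{1+\gamma}{2}\,\frac{4GM}{c^{2}b},
\qquad
M_{\mathrm{lens}} \;\propto\; \frac{2}{1+\gamma}\,
\frac{c^{2}}{4G}\,\theta_{E}^{2}\,\frac{D_{l}D_{s}}{D_{ls}} .
\]
```

    Because the dynamical (velocity-dispersion) mass carries no γ-dependence, equating the two mass estimates isolates the factor (1+γ)/2, which is how the comparison above constrains γ.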

  10. Self-consistent field theory based molecular dynamics with linear system-size scaling

    Energy Technology Data Exchange (ETDEWEB)

    Richters, Dorothee [Institute of Mathematics and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 9, D-55128 Mainz (Germany); Kühne, Thomas D., E-mail: kuehne@uni-mainz.de [Institute of Physical Chemistry and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 7, D-55128 Mainz (Germany); Technical and Macromolecular Chemistry, University of Paderborn, Warburger Str. 100, D-33098 Paderborn (Germany)

    2014-04-07

    We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.

  11. Scale invariance, killing vectors, and the size of the fifth dimension

    International Nuclear Information System (INIS)

    Ross, D.K.

    1986-01-01

    An analysis is made of the classical five-dimensional sourceless Kaluza-Klein equations without assuming the existence of the usual ∂/∂ψ Killing vector, where ψ is the coordinate of the fifth dimension. The physical distance around the fifth dimension, D_5, needed for the calculation of the fine structure constant α, is not calculable in the usual theory because the equations have a global scale invariance. In the present case, the Killing vector and the global scale invariance are not present, but it is found rather generally that D_5 = 0. This indicates that quantum gravity is a necessary ingredient if α is to be calculated. It also provides an alternate explanation of why the universe appears four-dimensional

  12. Size matters: the ethical, legal, and social issues surrounding large-scale genetic biobank initiatives

    Directory of Open Access Journals (Sweden)

    Klaus Lindgaard Hoeyer

    2012-04-01

    Full Text Available During the past ten years the complex ethical, legal and social issues (ELSI) typically surrounding large-scale genetic biobank research initiatives have been intensely debated in academic circles. In many ways genetic epidemiology has undergone a set of changes resembling what in physics has been called a transition into Big Science. This article outlines consequences of this transition and suggests that the change in scale implies challenges to the roles of scientists and public alike. An overview of key issues is presented, and it is argued that biobanks represent not just scientific endeavors with purely epistemic objectives, but also political projects with social implications. As such, they demand clever maneuvering among social interests to succeed.

  13. Finite-size scaling functions for directed polymers confined between attracting walls

    Energy Technology Data Exchange (ETDEWEB)

    Owczarek, A L [Department of Mathematics and Statistics, University of Melbourne, Parkville, Victoria 3052 (Australia); Prellberg, T [School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS (United Kingdom); Rechnitzer, A [Department of Mathematics, University of British Columbia, Vancouver, BC V6T 1Z2 (Canada)

    2008-01-25

    The exact solution of directed self-avoiding walks confined to a slit of finite width and interacting with the walls of the slit via an attractive potential has been recently calculated. The walks can be considered to model the polymer-induced steric stabilization and sensitized flocculation of colloidal dispersions. The large-width asymptotics led to a phase diagram different from that of a polymer attached to, and attracted to, a single wall. The question that arises is: can one interpolate between the single-wall and two-wall cases? In this paper, we calculate the exact scaling functions for the partition function by considering the two-variable asymptotics of the partition function for simultaneously large length and large width. Consequently, we find the scaling functions for the force induced by the polymer on the walls. We find that these scaling functions are given by elliptic θ functions. In some parts of the phase diagram there is a more complex crossover between the single-wall and two-wall cases, and we elucidate how this happens.

  14. A multi-scale PDMS fabrication strategy to bridge the size mismatch between integrated circuits and microfluidics.

    Science.gov (United States)

    Muluneh, Melaku; Issadore, David

    2014-12-07

    In recent years there has been great progress harnessing the small-feature size and programmability of integrated circuits (ICs) for biological applications, by building microfluidics directly on top of ICs. However, a major hurdle to the further development of this technology is the inherent size-mismatch between ICs (~mm) and microfluidic chips (~cm). Increasing the area of the ICs to match the size of the microfluidic chip, as has often been done in previous studies, leads to a waste of valuable space on the IC and an increase in fabrication cost (>100×). To address this challenge, we have developed a three dimensional PDMS chip that can straddle multiple length scales of hybrid IC/microfluidic chips. This approach allows millimeter-scale ICs, with no post-processing, to be integrated into a centimeter-sized PDMS chip. To fabricate this PDMS chip we use a combination of soft-lithography and laser micromachining. Soft lithography was used to define micrometer-scale fluid channels directly on the surface of the IC, allowing fluid to be controlled with high accuracy and brought into close proximity to sensors for highly sensitive measurements. Laser micromachining was used to create ~50 μm vias to connect these molded PDMS channels to a larger PDMS chip, which can connect multiple ICs and house fluid connections to the outside world. To demonstrate the utility of this approach, we built and demonstrated an in-flow magnetic cytometer that consisted of a 5 × 5 cm(2) microfluidic chip that incorporated a commercial 565 × 1145 μm(2) IC with a GMR sensing circuit. We additionally demonstrated the modularity of this approach by building a chip that incorporated two of these GMR chips connected in series.

  15. Effect of training data size and noise level on support vector machines virtual screening of genotoxic compounds from large compound libraries.

    Science.gov (United States)

    Kumar, Pankaj; Ma, Xiaohua; Liu, Xianghui; Jia, Jia; Bucong, Han; Xue, Ying; Li, Ze Rong; Yang, Sheng Yong; Wei, Yu Quan; Chen, Yu Zong

    2011-05-01

    Various in vitro and in-silico methods have been used for drug genotoxicity tests, which show limited genotoxicity (GT+) and non-genotoxicity (GT-) identification rates. New methods and combinatorial approaches have been explored for enhanced collective identification capability. The rates of in-silico methods may be further improved by significantly diversified training data enriched by the large number of recently reported GT+ and GT- compounds, but a major concern is the increased noise levels arising from high false-positive rates of in vitro data. In this work, we evaluated the effect of training data size and noise level on the performance of the support vector machines (SVM) method, known to tolerate high noise levels in training data. Two SVMs of different diversity/noise levels were developed and tested. H-SVM trained by higher diversity higher noise data (GT+ in any in vivo or in vitro test) outperforms L-SVM trained by lower noise lower diversity data (GT+ in in vivo or Ames test only). H-SVM trained by 4,763 GT+ compounds reported before 2008 and 8,232 GT- compounds excluding clinical trial drugs correctly identified 81.6% of the 38 GT+ compounds reported since 2008, predicted 83.1% of the 2,008 clinical trial drugs as GT-, and 23.96% of 168 K MDDR and 27.23% of 17.86M PubChem compounds as GT+. These are comparable to the 43.1-51.9% GT+ and 75-93% GT- rates of existing in-silico methods, 58.8% GT+ and 79% GT- rates of the Ames method, and the estimated percentages of 23% in vivo and 31-33% in vitro GT+ compounds in the "universe of chemicals". There is a substantial level of agreement between H-SVM and L-SVM predicted GT+ and GT- MDDR compounds and the prediction from TOPKAT. SVM showed good potential in identifying GT+ compounds from large compound libraries based on higher diversity and higher noise training data.
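    The two experimental variables at the center of this study, training-set size and label-noise level, are easy to probe in a generic setting. The scikit-learn sketch below trains an SVM on synthetic data while sweeping both; it is illustrative only and unrelated to the genotoxicity data sets above.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Sweep label-noise fraction and training-set size; note that flip_y noises
# the held-out labels too, so measured accuracy saturates below 1.0.
for noise in (0.0, 0.2):
    for n_train in (200, 2000):
        X, y = make_classification(n_samples=n_train + 1000, n_features=50,
                                   n_informative=10, flip_y=noise,
                                   random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=1000,
                                              random_state=0)
        acc = SVC().fit(Xtr, ytr).score(Xte, yte)
        print(f"noise={noise:.1f}  n_train={n_train:>4}  accuracy={acc:.3f}")
```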

  16. Diffraction-based analysis of tunnel size for a scaled external occulter testbed

    Science.gov (United States)

    Sirbu, Dan; Kasdin, N. Jeremy; Vanderbei, Robert J.

    2016-07-01

    For performance verification of an external occulter mask (also called a starshade), scaled testbeds have been developed to measure the suppression of the occulter shadow in the pupil plane and contrast in the image plane. For occulter experiments the scaling is typically performed by maintaining an equivalent Fresnel number. The original Princeton occulter testbed was oversized with respect to both input beam and shadow propagation to limit any diffraction effects due to finite testbed enclosure edges; however, to operate at realistic space-mission equivalent Fresnel numbers an extended testbed is currently under construction. With the longer propagation distances involved, diffraction effects due to the edge of the tunnel must now be considered in the experiment design. Here, we present a diffraction-based model of two separate tunnel effects. First, we consider the effect of tunnel-edge induced diffraction ringing upstream from the occulter mask. Second, we consider the diffraction effect due to clipping of the output shadow by the tunnel downstream from the occulter mask. These calculations are performed for a representative point design relevant to the new Princeton occulter experiment, but we also present an analytical relation that can be used for other propagation distances.
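    Equivalent-Fresnel-number scaling, the basis of the testbed design discussed here, reduces to a single number: N_F = a²/(λz) for an occulter of radius a, wavelength λ, and propagation distance z. A tiny helper makes the flight/lab equivalence explicit; the numbers below are illustrative, not the Princeton testbed's actual dimensions.

```python
def fresnel_number(radius_m: float, wavelength_m: float, distance_m: float) -> float:
    """N_F = a^2 / (lambda * z) for an occulter of radius a at distance z."""
    return radius_m**2 / (wavelength_m * distance_m)

# A space occulter and a lab occulter at (nearly) equal Fresnel number;
# all values are hypothetical, for illustration only.
print(fresnel_number(radius_m=17.0,   wavelength_m=550e-9, distance_m=37e6))  # ~14.2
print(fresnel_number(radius_m=0.0135, wavelength_m=633e-9, distance_m=20.3))  # ~14.2
```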

  17. A triple-scale crystal plasticity modeling and simulation on size effect due to fine-graining

    International Nuclear Information System (INIS)

    Kurosawa, Eisuke; Aoyagi, Yoshiteru; Tadano, Yuichi; Shizawa, Kazuyuki

    2010-01-01

    In this paper, a triple-scale crystal plasticity model bridging three hierarchical material structures, i.e., dislocation structure, grain aggregate and practical macroscopic structure is developed. Geometrically necessary (GN) dislocation density and GN incompatibility are employed so as to describe isolated dislocations and dislocation pairs in a grain, respectively. Then the homogenization method is introduced into the GN dislocation-crystal plasticity model for derivation of the governing equation of macroscopic structure with the mathematical and physical consistencies. Using the present model, a triple-scale FE simulation bridging the above three hierarchical structures is carried out for f.c.c. polycrystals with different mean grain size. It is shown that the present model can qualitatively reproduce size effects of macroscopic specimen with ultrafine-grain, i.e., the increase of initial yield stress, the decrease of hardening ratio after reaching tensile strength and the reduction of tensile ductility with decrease of its grain size. Moreover, the relationship between macroscopic yielding of specimen and microscopic grain yielding is discussed and the mechanism of the poor tensile ductility due to fine-graining is clarified. (author)

  18. Toward industrial scale synthesis of ultrapure singlet nanoparticles with controllable sizes in a continuous gas-phase process

    Science.gov (United States)

    Feng, Jicheng; Biskos, George; Schmidt-Ott, Andreas

    2015-10-01

    Continuous gas-phase synthesis of nanoparticles is associated with rapid agglomeration, which can be a limiting factor for numerous applications. In this report, we challenge this paradigm by providing experimental evidence to support that gas-phase methods can be used to produce ultrapure non-agglomerated “singlet” nanoparticles having tunable sizes at room temperature. By controlling the temperature in the particle growth zone to guarantee complete coalescence of colliding entities, the size of singlets in principle can be regulated from that of single atoms to any desired value. We assess our results in the context of a simple analytical model to explore the dependence of singlet size on the operating conditions. Agreement of the model with experimental measurements shows that these methods can be effectively used for producing singlets that can be processed further by many alternative approaches. Combined with the capabilities of up-scaling and unlimited mixing that spark ablation enables, this study provides an easy-to-use concept for producing the key building blocks for low-cost industrial-scale nanofabrication of advanced materials.

  19. Mechanobiological induction of long-range contractility by diffusing biomolecules and size scaling in cell assemblies

    Science.gov (United States)

    Dasbiswas, K.; Alster, E.; Safran, S. A.

    2016-06-01

    Mechanobiological studies of cell assemblies have generally focused on cells that are, in principle, identical. Here we predict theoretically the effect on cells in culture of locally introduced biochemical signals that diffuse and locally induce cytoskeletal contractility which is initially small. In steady-state, both the concentration profile of the signaling molecule as well as the contractility profile of the cell assembly are inhomogeneous, with a characteristic length that can be of the order of the system size. The long-range nature of this state originates in the elastic interactions of contractile cells (similar to long-range “macroscopic modes” in non-living elastic inclusions) and the non-linear diffusion of the signaling molecules, here termed mechanogens. We suggest model experiments on cell assemblies on substrates that can test the theory as a prelude to its applicability in embryo development where spatial gradients of morphogens initiate cellular development.

  20. Impact of scaling voltage and size on the performance of Side-contacted Field Effect Diode

    Science.gov (United States)

    Touchaei, Behnam Jafari; Manavizadeh, Negin

    2018-05-01

    The Side-contacted Field Effect Diode (S-FED), with low leakage current and a high Ion/Ioff ratio, has recently been introduced to suppress short-channel effects in the nanoscale regime. The voltage and size scalability of S-FEDs and their effects on power consumption, propagation delay time, and power-delay product have been studied in this article. The most attractive property relates to the channel-length-to-channel-thickness ratio of the S-FED, which is significantly smaller than that of a MOSFET, while the gates' control over the channel improves and the off-state current falls dramatically. This promising advantage not only improves important S-FED characteristics such as the subthreshold slope but also eliminates latch-up and the floating-body effect.

  1. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  2. Fractal scaling of particle size distribution and relationships with topsoil properties affected by biological soil crusts.

    Directory of Open Access Journals (Sweden)

    Guang-Lei Gao

    Full Text Available BACKGROUND: Biological soil crusts are common components of desert ecosystems; they cover the ground surface and interact with the topsoil, contributing to desertification control and degraded-land restoration in arid and semiarid regions. METHODOLOGY/PRINCIPAL FINDINGS: To distinguish the changes in topsoil affected by biological soil crusts, we compared topsoil properties across three successional types of biological soil crusts (algae, lichen, and moss crusts), as well as the reference sandland, in the Mu Us Desert, Northern China. Relationships between fractal dimensions of the soil particle size distribution and selected soil properties are discussed as well. The results indicated that biological soil crusts had significant positive effects on soil physical structure (P < 0.05), and soil organic carbon and nutrients showed an upward trend across the successional stages of biological soil crusts. Fractal dimensions ranged from 2.1477 to 2.3032 and were significantly linearly correlated with the selected soil properties (R² = 0.494-0.955, P < 0.01). CONCLUSIONS/SIGNIFICANCE: Biological soil crusts cause an important increase in soil fertility and are beneficial to sand fixation, although the process is rather slow. The fractal dimension proves to be a sensitive and useful index for quantifying changes in soil properties that additionally indicate desertification. This study provides a firm basis for future policy-making on optimal solutions regarding desertification control and assessment, as well as degraded-ecosystem restoration in arid and semiarid regions.
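
    Fractal dimensions of this kind are typically obtained with a mass-based estimator in the style of Tyler and Wheatcraft, where D = 3 minus the slope of log(M(<r)/M_total) against log(r/r_max) over the measured size classes. The sketch below illustrates that calculation; the sieve sizes and cumulative mass fractions are synthetic stand-ins, not data from the study.

```python
import numpy as np

# synthetic sieve data: upper size of each class (mm) and cumulative mass fraction
r = np.array([0.05, 0.1, 0.25, 0.5, 1.0])
mass_frac = np.array([0.08, 0.20, 0.45, 0.80, 1.0])   # M(<r) / M_total

# D = 3 - slope of log(M(<r)/M_total) vs log(r/r_max)
slope, _ = np.polyfit(np.log(r / r.max()), np.log(mass_frac), 1)
print(f"mass-based fractal dimension D = {3.0 - slope:.3f}")
```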

  3. Optimal Siting and Sizing of Energy Storage System for Power Systems with Large-scale Wind Power Integration

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun

    2015-01-01

    This paper proposes algorithms for optimal siting and sizing of Energy Storage Systems (ESS) for the operation planning of power systems with large-scale wind power integration. The ESS in this study aims to mitigate the wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain the generation-load balance. The charging and discharging of the ESS are optimized considering the operation cost of conventional generators, the capital cost of the ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the...

  4. Finite-size scaling of the entanglement entropy of the quantum Ising chain with homogeneous, periodically modulated and random couplings

    International Nuclear Information System (INIS)

    Iglói, Ferenc; Lin, Yu-Cheng

    2008-01-01

    Using free-fermionic techniques we study the entanglement entropy of a block of contiguous spins in a large finite quantum Ising chain in a transverse field, with couplings of different types: homogeneous, periodically modulated and random. We carry out a systematic study of finite-size effects at the quantum critical point, and evaluate subleading corrections both for open and for periodic boundary conditions. For a block corresponding to half of a finite chain, the position of the maximum of the entropy as a function of the control parameter (e.g. the transverse field) can define the effective critical point in the finite sample. For homogeneous chains, we demonstrate that the scaling behavior of the entropy near the quantum phase transition is in agreement with the universality hypothesis, and we calculate the shift of the effective critical point, which has different scaling behaviors for open and for periodic boundary conditions.
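
    The qualitative setup above is easy to reproduce numerically. The sketch below is a minimal illustration (not the paper's free-fermion technique): it computes the half-chain entanglement entropy of a small open transverse-field Ising chain by exact diagonalization, with the chain length and field values chosen arbitrarily.

```python
import numpy as np

def ising_hamiltonian(n, h):
    """H = -sum_i sx_i sx_{i+1} - h * sum_i sz_i on an open chain of n spins."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])

    def site_op(single, site):
        mats = [np.eye(2)] * n
        mats[site] = single
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        H -= site_op(sx, i) @ site_op(sx, i + 1)
    for i in range(n):
        H -= h * site_op(sz, i)
    return H

def half_chain_entropy(n, h):
    _, vecs = np.linalg.eigh(ising_hamiltonian(n, h))
    # Schmidt decomposition of the ground state across the middle bond
    psi = vecs[:, 0].reshape(2 ** (n // 2), 2 ** (n - n // 2))
    s = np.linalg.svd(psi, compute_uv=False)
    p = s[s > 1e-12] ** 2
    return float(-np.sum(p * np.log(p)))

for h in (0.5, 1.0, 1.5):   # the entropy is largest near the critical field h = 1
    print(h, round(half_chain_entropy(8, h), 4))
```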

  5. A measurement strategy and an error-compensation model for the on-machine laser measurement of large-scale free-form surfaces

    International Nuclear Information System (INIS)

    Li, Bin; Li, Feng; Liu, Hongqi; Cai, Hui; Mao, Xinyong; Peng, Fangyu

    2014-01-01

    This study presents a novel measurement strategy and an error-compensation model for the measurement of large-scale free-form surfaces in on-machine laser measurement systems. To improve the measurement accuracy, the effects of the scan depth, surface roughness, incident angle and azimuth angle on the measurement results were investigated experimentally, and a practical measurement strategy considering the position and orientation of the sensor is presented. Also, a semi-quantitative model based on geometrical optics is proposed to compensate for the measurement error associated with the incident angle. The normal vector of the measurement point is determined using a cross-curve method from the acquired surface data. Then, the azimuth angle and incident angle are calculated to inform the measurement strategy and error-compensation model, respectively. The measurement strategy and error-compensation model are verified through the measurement of a large propeller blade on a heavy machine tool in a factory environment. The results demonstrate that the strategy and the model are effective in increasing the measurement accuracy. (paper)
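
    The geometric step described above reduces to elementary vector algebra: a surface normal is estimated from two tangent directions obtained from the cross-curve construction, and the incident angle is the angle between the laser axis and that normal. A minimal sketch, with made-up tangent vectors and beam direction, follows.

```python
import numpy as np

def incident_angle_deg(tangent_u, tangent_v, beam_dir):
    """Angle between the laser axis and the surface normal, in degrees."""
    n = np.cross(tangent_u, tangent_v)   # normal from two surface tangents
    n = n / np.linalg.norm(n)
    b = np.asarray(beam_dir, dtype=float)
    b = b / np.linalg.norm(b)
    cos_t = abs(np.dot(n, b))            # orientation of the normal is arbitrary
    return np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0)))

# tangents estimated from neighbouring points along two acquired surface curves
tu = np.array([1.0, 0.0, 0.1])
tv = np.array([0.0, 1.0, 0.2])
print(incident_angle_deg(tu, tv, beam_dir=[0.0, 0.0, -1.0]))
```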

  6. Removal performance and water quality analysis of paper machine white water in a full-scale wastewater treatment plant.

    Science.gov (United States)

    Shi, Shuai; Wang, Can; Fang, Shuai; Jia, Minghao; Li, Xiaoguang

    2017-06-01

    Paper machine white water is generally characterized by high concentrations of suspended solids and organic matter. A combined physicochemical-biological and filtration process was used in this study to remove pollutants from the wastewater. The removal efficiencies of the pollutants in the physicochemical and biological processes were evaluated separately. Furthermore, advanced techniques were used to analyse the water quality before and after treatment. Experimental results showed that the removal efficiency of suspended solids (SS) for the whole system was above 99%, of which the physicochemical treatment in the forepart of the system achieved about 97%. The removal efficiencies of chemical oxygen demand (COD) and colour followed a similar trend after physicochemical treatment, corresponding to the proportions of suspended and near-colloidal organic matter in the wastewater. After biological treatment, the removal efficiencies of COD and colour reached about 97% and 90%, respectively. Furthermore, molecular weight (MW) distribution analysis showed that after treatment low-MW molecules (analysis showed that most humic-like substances were effectively removed during the treatment. The gas chromatography/mass spectrometry analyses showed that the composition of organic matter in the wastewater was not complicated. Methylsiloxanes were the typical organic components in the raw wastewater and most of them were removed after treatment.

  7. Impact of size and sorption on degradation of trichloroethylene and polychlorinated biphenyls by nano-scale zerovalent iron

    Energy Technology Data Exchange (ETDEWEB)

    Petersen, Elijah J. [Material Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899 (United States); Pinto, Roger A. [Department of Chemical Engineering, University of Michigan, Ann Arbor (United States); Shi, Xiangyang [State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, Donghua University, Shanghai 201620 (China); College of Chemistry, Chemical Engineering and Biotechnology, Donghua University, Shanghai 201620 (China); Huang, Qingguo, E-mail: qhuang@uga.edu [Department of Crop and Soil Sciences, University of Georgia, Griffin, GA 30223 (United States)

    2012-12-15

    Highlights: • nZVIs were synthesized using a layer-by-layer or poly(acrylic acid) stabilization approach. • These nZVIs were used to degrade TCE and PCBs. • nZVI coatings impacted reactivity by altering pollutant/particle interactions. • Smaller nZVI particle size led to greater reactivity. - Abstract: Nano-scale zerovalent iron (nZVI) has been studied in recent years for environmental remediation applications such as the degradation of chlorinated organic contaminants. To overcome limitations related to the transport of nZVI, it is becoming common to add a polymer stabilizer to limit aggregation and enhance particle reactivity. Another method investigated to enhance particle reactivity has been to limit particle size through novel synthesis techniques. However, the relative impacts of particle size and of the interactions of the chemicals with the coatings are not yet well understood. The purpose of this study was to investigate the effects of particle size and of polymer-coating or polyelectrolyte multilayer (PEM) synthesis conditions on the degradation of two common chlorinated contaminants: trichloroethylene (TCE) and polychlorinated biphenyls (PCBs). This was accomplished using two different synthesis techniques, a layer-by-layer approach at different pH values or iron reduction in the presence of varying concentrations of poly(acrylic acid). nZVI produced by both techniques yielded higher degradation rates than a traditional approach. The mechanistic investigation indicated that hydrophobicity and sorption to the multilayer affect the availability of the hydrophobic compound to the nZVI, and that particle size also played a large role, with smaller particles giving faster dechlorination rates.

  8. LHC Report: machine development

    CERN Multimedia

    Rogelio Tomás García for the LHC team

    2015-01-01

    Machine development weeks are carefully planned in the LHC operation schedule to optimise and further study the performance of the machine. The first machine development session of Run 2 ended on Saturday, 25 July. Despite various hiccoughs, it allowed the operators to make great strides towards improving the long-term performance of the LHC.   The main goals of this first machine development (MD) week were to determine the minimum beam-spot size at the interaction points given existing optics and collimation constraints; to test new beam instrumentation; to evaluate the effectiveness of performing part of the beam-squeezing process during the energy ramp; and to explore the limits on the number of protons per bunch arising from the electromagnetic interactions with the accelerator environment and the other beam. Unfortunately, a series of events reduced the machine availability for studies to about 50%. The most critical issue was the recurrent trip of a sextupolar corrector circuit –...

  9. Mapping Savanna Tree Species at Ecosystem Scales Using Support Vector Machine Classification and BRDF Correction on Airborne Hyperspectral and LiDAR Data

    Directory of Open Access Journals (Sweden)

    Gregory P. Asner

    2012-11-01

    Full Text Available Mapping the spatial distribution of plant species in savannas provides insight into the roles of competition, fire, herbivory, soils and climate in maintaining the biodiversity of these ecosystems. This study focuses on the challenges facing large-scale species mapping using a fusion of Light Detection and Ranging (LiDAR) and hyperspectral imagery. Here we build upon previous work on airborne species detection by using a two-stage support vector machine (SVM) classifier to first predict species from hyperspectral data at the pixel scale. Tree crowns are segmented from the LiDAR imagery such that crown-level information, such as maximum tree height, can then be combined with the pixel-level species probabilities to predict the species of each tree. An overall prediction accuracy of 76% was achieved for 15 species. We also show that bidirectional reflectance distribution function (BRDF) effects caused by the anisotropic scattering properties of savanna vegetation can result in flight-line artifacts evident in species probability maps, yet these can be largely mitigated by applying a semi-empirical BRDF model to the hyperspectral data. We find that confronting these three challenges—reflectance anisotropy, integration of pixel- and crown-level data, and crown delineation over large areas—enables species mapping at ecosystem scales for monitoring biodiversity and ecosystem function.
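
    A hedged sketch of such a two-stage scheme is given below: stage one trains a pixel-level SVM with probability outputs on the spectra, and stage two averages the per-pixel class probabilities within each LiDAR-derived crown before assigning a species. All data, labels and crown ids here are synthetic placeholders, and crown-level attributes such as height are omitted for brevity.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_bands, n_species = 300, 20, 3
X = rng.normal(size=(n_pixels, n_bands))        # pixel spectra (synthetic)
y = rng.integers(0, n_species, size=n_pixels)   # pixel species labels (synthetic)
crown_id = rng.integers(0, 40, size=n_pixels)   # from LiDAR crown segmentation

pixel_svm = SVC(probability=True, random_state=0).fit(X, y)   # stage 1: pixel level
proba = pixel_svm.predict_proba(X)

# stage 2: average pixel probabilities within each crown, then assign a species
for c in np.unique(crown_id)[:5]:
    crown_proba = proba[crown_id == c].mean(axis=0)
    print(f"crown {c}: predicted species {crown_proba.argmax()}")
```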

  10. An optimum city size? The scaling relationship for urban population and fine particulate (PM2.5) concentration

    International Nuclear Information System (INIS)

    Han, Lijian; Zhou, Weiqi; Pickett, Steward T.A.; Li, Weifeng; Li, Li

    2016-01-01

    We utilize the distribution of PM2.5 concentration and population in large cities at the global scale to illustrate the relationship between urbanization and urban air quality. We found: 1) The relationship varies greatly among continents and countries. Large cities in North America, Europe, and Latin America have better air quality than those on other continents, while those in China and India have the worst air quality. 2) The relationships between urban population size and PM2.5 concentration in large cities differ among continents and countries. PM2.5 concentrations in large cities in North America, Europe, and Latin America show little fluctuation or a small increasing trend, while those in Africa and India follow a "U"-type relationship and those in China an inverse "U"-type relationship. 3) The potential contribution of population to PM2.5 concentration is higher in the large cities of China and India, and lower in other large cities. - Highlights: • Urban population and PM2.5 concentration vary greatly among regions. • An increase in urban population size does not always raise PM2.5 concentration. • The population's potential contribution to PM2.5 concentration is higher in China.

  11. Observation of chorus waves by the Van Allen Probes: dependence on solar wind parameters and scale size

    Science.gov (United States)

    Aryan, H.; Sibeck, D. G.; Balikhin, M. A.; Agapitov, O. V.; Kletzing, C.

    2016-12-01

    Highly energetic electrons in the Earth's Van Allen radiation belts can cause serious damage to spacecraft electronic systems and affect the atmospheric composition if they precipitate into the upper atmosphere. Whistler-mode chorus waves have attracted significant attention in recent decades for their crucial role in the acceleration and loss of energetic electrons that ultimately change the dynamics of the radiation belts. The distribution of these waves in the inner magnetosphere is commonly presented as a function of geomagnetic activity. However, geomagnetic indices are non-specific parameters that are compiled from imperfectly covered ground-based measurements. The present study uses wave data from the two Van Allen Probes to present the distribution of lower-band chorus waves not only as functions of a single geomagnetic index and of solar wind parameters, but also as functions of combined parameters. The study also takes advantage of the unique equatorial orbit of the Van Allen Probes to estimate the average scale size of chorus wave packets, during close separations between the two spacecraft, as a function of radial distance, magnetic latitude, and geomagnetic activity. Results show that the average scale size of chorus wave packets is approximately 1300-2300 km. The results also show that the inclusion of combined parameters can provide a better representation of the chorus wave distributions in the inner magnetosphere, and can therefore further improve our knowledge of the acceleration and loss of radiation belt electrons.

  12. Bioresorbable scaffolds for bone tissue engineering: optimal design, fabrication, mechanical testing and scale-size effects analysis.

    Science.gov (United States)

    Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R

    2015-03-01

    Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (a homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls for engineered scaffolds. This work investigates how the design-to-manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit cells, and thus scale-size effects on the mechanical response considering finite periodicity are investigated and compared with the predictions from the homogenization method, which assumes in the limit infinitely repeated unit cells. Results show that a limited number of unit cells (3-5 repeated on a side) introduces some scale effects, but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations, due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However, good regression correlations (R² > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  13. Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model

    Science.gov (United States)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present the development of a soil evolution framework and multiscale modelling of the surfaces of Mars, the Moon and Itokawa, thus providing an atlas of extraterrestrial Particle Size Distributions (PSDs). These PSDs are based on a tailoring method that interconnects several datasets from different sites captured by the various missions. The final integrated product is then justified through a soil evolution analysis model constructed from fundamental physical principles (Charalambous, 2013). The construction of the PSD takes into account the macroscale fresh primary impacts and their products, the mesoscale distributions obtained from the in-situ data of surface missions (Golombek et al., 1997, 2012) and finally the microscopic-scale distributions provided by Curiosity and the Phoenix Lander (Pike, 2011). The distribution extends naturally to the scales at which no data currently exist owing to the lack of scientific instruments capturing the populations at those scales. The extension is based on the model distribution (Charalambous, 2013), which takes as parameters known values of material-specific fragmentation probabilities and grinding limits. Additionally, the establishment of a closed-form statistical distribution provides a quantitative description of the soil's structure. Consequently, reverse engineering of the model distribution allows the synthesis of soil that faithfully represents the particle population at the studied sites (Charalambous, 2011). Such a representation essentially delivers a virtual soil environment to work with for numerous applications. A specific application demonstrated here is the information that can be directly extracted for the probability of successful drilling as a function of distance, in an effort to aid the HP3 instrument of the 2016 InSight Mission to Mars. Pike, W. T., et al. "Quantification of the dry history of the Martian soil inferred from in situ microscopy

  14. Sterol synthesis and cell size distribution under oscillatory growth conditions in Saccharomyces cerevisiae scale-down cultivations.

    Science.gov (United States)

    Marbà-Ardébol, Anna-Maria; Bockisch, Anika; Neubauer, Peter; Junne, Stefan

    2018-02-01

    Physiological responses of yeast to oscillatory environments as they appear in the liquid phase in large-scale bioreactors have been the subject of past studies. So far, however, the impact on the sterol content and intracellular regulation remains to be investigated. Since oxygen is a cofactor in several reaction steps within sterol metabolism, changes in oxygen availability, as occurs in production-scale aerated bioreactors, might have an influence on the regulation and incorporation of free sterols into the cell lipid layer. Therefore, sterol and fatty acid synthesis in two- and three-compartment scale-down Saccharomyces cerevisiae cultivation were studied and compared with typical values obtained in homogeneous lab-scale cultivations. While cells were exposed to oscillating substrate and oxygen availability in the scale-down cultivations, growth was reduced and accumulation of carboxylic acids was increased. Sterol synthesis was elevated to ergosterol at the same time. The higher fluxes led to increased concentrations of esterified sterols. The cells thus seem to utilize the increased availability of precursors to fill their sterol reservoirs; however, this seems to be limited in the three-compartment reactor cultivation due to a prolonged exposure to oxygen limitation. Besides, a larger heterogeneity within the single-cell size distribution was observed under oscillatory growth conditions with three-dimensional holographic microscopy. Hence the impact of gradients is also observable at the morphological level. The consideration of such a single-cell-based analysis provides useful information about the homogeneity of responses among the population. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview of current sustainable machining. Its chapters cover the concept in its economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful at both undergraduate and postgraduate levels, and it is of interest to all those working with manufacturing and machining technology.

  16. Effect of Machining Velocity in Nanoscale Machining Operations

    International Nuclear Information System (INIS)

    Islam, Sumaiya; Khondoker, Noman; Ibrahim, Raafat

    2015-01-01

    The aim of this study is to investigate the generated forces and deformations of single-crystal Cu with (100), (110) and (111) crystallographic orientations in nanoscale machining operations. A nanoindenter equipped with a nanoscratching attachment was used for the machining operations and in-situ observation of a nanoscale groove. As a machining parameter, the machining velocity was varied to measure the normal and cutting forces. At a fixed machining velocity, different levels of normal and cutting forces were generated due to the different crystallographic orientations of the specimens. Moreover, after the machining operation the percentage of elastic recovery was measured, and it was found that both elastic and plastic deformations were responsible for producing a nanoscale groove within the range of machining velocities of 250-1000 nm/s. (paper)

  17. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    Science.gov (United States)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

    A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and on the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats is identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations is reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes yield water-content evolutions that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the
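
    The local drainage criterion described above can be illustrated with a toy invasion-percolation loop: each throat is assigned a Young-Laplace entry capillary pressure, and the non-wetting phase invades accessible throats in order of increasing entry pressure. The network, radii, surface tension and contact angle below are illustrative assumptions, not the paper's calibrated model.

```python
import heapq
from math import cos

GAMMA = 0.072   # N/m, air-water surface tension (assumed)
THETA = 0.0     # rad, perfectly wetting solid (assumed)

def entry_pressure(r_throat):
    """Young-Laplace entry capillary pressure for an effective throat radius."""
    return 2.0 * GAMMA * cos(THETA) / r_throat

# toy pore network: throats as (upstream_pore, downstream_pore, radius in m);
# pore 0 is connected to the non-wetting reservoir
throats = [(0, 1, 40e-6), (1, 2, 15e-6), (1, 3, 25e-6), (2, 4, 30e-6)]

invaded = {0}
frontier = [(entry_pressure(r), a, b) for a, b, r in throats if a in invaded]
heapq.heapify(frontier)
while frontier:
    pc, _, pore = heapq.heappop(frontier)   # cheapest accessible throat first
    if pore in invaded:
        continue
    invaded.add(pore)
    print(f"pore {pore} drained at entry pressure {pc:.0f} Pa")
    for a, b, r in throats:
        if a == pore and b not in invaded:
            heapq.heappush(frontier, (entry_pressure(r), a, b))
```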

  18. Positional dependence of scale size and shape in butterfly wings: wing-wide phenotypic coordination of color-pattern elements and background.

    Science.gov (United States)

    Kusaba, Kiseki; Otaki, Joji M

    2009-02-01

    Butterfly wing color-patterns are a phenotypically coordinated array of scales whose color is determined as cellular interpretation outputs for morphogenic signals. Here we investigated distribution patterns of scale shape and size in relation to position and coloration on the hindwings of a nymphalid butterfly Junonia orithya. Most scales had a smooth edge but scales at and near the natural and ectopic eyespot foci and in the postbasal area were jagged. Scale size decreased regularly from the postbasal to distal areas, and eyespots occasionally had larger scales than the background. Reasonable correlations were obtained between the eyespot size and focal scale size in females. Histological and real-time individual observations of the color-pattern developmental sequence showed that the background brown and blue colors expanded from the postbasal to distal areas independently from the color-pattern elements such as eyespots. These data suggest that morphogenic signals for coloration directly or indirectly influence the scale shape and size and that the blue "background" is organized by a long-range signal from an unidentified organizing center in J. orithya.

  19. The 1/3-scale aerodynamics performance test of helium compressor for GTHTR300 turbo machine of JAERI (step 1)

    International Nuclear Information System (INIS)

    Takada, Shoji; Takizuka, Takakazu; Kunitomi, Kazuhiko; Xing, Yan

    2003-01-01

    A program for research and development on the aerodynamics of a helium gas compressor was planned for the power conversion system of the Gas Turbine High Temperature Reactor (GTHTR300). The three-dimensional aerodynamic design of the compressor achieved a high polytropic efficiency of 90%, while keeping a sufficient surge margin of over 30%. To validate the design of the helium gas compressor of the GTHTR300, aerodynamic performance tests were planned, and a 1/3-scale, 4-stage compressor model was designed. In the tests, the performance data of the helium gas compressor model will be acquired using helium gas as the working fluid. The maximum design pressure at the model inlet is 0.88 MPa, which allows the Reynolds number to be sufficiently high. The present study was entrusted to the authors by the Ministry of Education, Culture, Sports, Science and Technology of Japan. (author)

  20. Scaled photographs of surf over the full range of breaker sizes on the north shore of Oahu and Jaws, Maui, Hawaiian Islands (NODC Accession 0001753)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital surf photographs were scaled using surfers as height benchmarks to estimate the size of the breakers. Historical databases for surf height in Hawaii are...

  1. Estimating the size of the cavity and surrounding failed region for underground nuclear explosions from scaling rules

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, Leo A [El Paso Natural Gas Company (United States)

    1970-05-01

    The fundamental physical principles involved in the formation of an underground cavity by a nuclear explosion and in the breakage of the rock surrounding the cavity are examined from the point of view of making preliminary estimates of their sizes where there is a limited understanding of the rock characteristics. Scaling equations for cavity formation based on adiabatic expansion are reviewed and further developed to include the strength of the material surrounding the shot point as well as the overburden above the shot point. The region of rock breakage or permanent distortion surrounding the explosion-generated cavity is estimated using both the Von Mises and Coulomb-Mohr failure criteria. It is found that the ratio of the rock failure radius to the cavity radius for these two criteria becomes independent of yield and dependent only on the failure mechanics of the rock. The analytical solutions developed for the Coulomb-Mohr and Von Mises criteria are presented in graphical form. (author)
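
    For a rough sense of the quantities involved, a commonly quoted empirical form for the cavity radius is R_c = C · W^(1/3) / (ρh)^(1/4), with yield W, overburden density ρ and depth of burial h; the failed region then scales as a yield-independent multiple of R_c, consistent with the report's conclusion. The constants in the sketch below are illustrative placeholders, not values from the report.

```python
def cavity_radius_m(yield_kt, depth_m, rho=2.3, c=55.0):
    """Rough cavity radius; the constant c is an illustrative placeholder."""
    return c * yield_kt ** (1.0 / 3.0) / (rho * depth_m) ** 0.25

def failure_radius_m(r_cavity, k=2.5):
    """Failed-rock radius as a yield-independent multiple of the cavity radius;
    k depends only on the failure properties of the rock (value assumed here)."""
    return k * r_cavity

rc = cavity_radius_m(yield_kt=10.0, depth_m=400.0)
print(f"cavity radius ~ {rc:.1f} m, failed region ~ {failure_radius_m(rc):.1f} m")
```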

  2. An HTS machine laboratory prototype

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Træholt, Chresten

    2012-01-01

    This paper describes the Superwind HTS machine laboratory setup, a small-scale HTS machine designed and built as part of the efforts to identify and tackle some of the challenges HTS machine design may face. One of the challenges of HTS machines is the Torque Transfer Element (TTE), which ... conduction compared to a shaft. The HTS machine was successfully cooled to 77 K and tests have been performed. The IV curves of the HTS field winding employing 6 HTS coils indicate that two of the coils had been damaged. A maximum torque of 78 Nm was recorded during the experiments. Loaded with 33...

  3. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    Science.gov (United States)

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long-time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load-balancing approach can be generalized to other lattice-based problems.
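
    The load-balancing idea is straightforward to sketch: partition the simulation lattice into slabs whose widths are proportional to each GPU's measured throughput, so that heterogeneous devices finish their sub-steps at roughly the same time. The following toy partitioner (with invented throughput numbers) illustrates the principle; it is not the authors' CUDA implementation.

```python
def partition_slabs(n_planes, throughputs):
    """Split lattice planes into one slab per device, widths ~ throughput."""
    total = sum(throughputs)
    bounds, start = [], 0
    for i, t in enumerate(throughputs):
        if i == len(throughputs) - 1:
            width = n_planes - start          # last device absorbs rounding
        else:
            width = round(n_planes * t / total)
        bounds.append((start, start + width))
        start += width
    return bounds

# e.g. three GPUs, the first twice as fast as the other two
print(partition_slabs(n_planes=128, throughputs=[2.0, 1.0, 1.0]))
```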

  4. The critical behaviour of self-dual Z(N) spin systems - Finite size scaling and conformal invariance

    International Nuclear Information System (INIS)

    Alcaraz, F.C.

    1986-01-01

    Critical properties of a family of self-dual two-dimensional Z(N) models whose bulk free energy is exactly known at the self-dual point are studied. The analysis is performed by studying the finite-size behaviour of the corresponding one-dimensional quantum Hamiltonians, which also possess an exact solution at their self-dual point. By exploiting finite-size scaling ideas and the conformal invariance of the critical infinite system, the critical temperature and critical exponents as well as the central charge associated with the underlying conformal algebra are calculated for N up to 8. The results strongly suggest that the recently constructed Z(N) quantum field theory of Zamolodchikov and Fateev (1985) is the underlying field theory associated with these statistical mechanical systems. The conjecture that these models correspond to the bifurcation points in the phase diagram of the general Z(N) spin model, where a massless phase originates, is also tested for the Z(5) case. (Author)

  5. Size-dependent giant-magnetoresistance in millimeter scale GaAs/AlGaAs 2D electron devices

    Science.gov (United States)

    Mani, R. G.

    2013-01-01

    Large changes in the electrical resistance induced by the application of a small magnetic field are potentially useful for device-applications. Such Giant Magneto-Resistance (GMR) effects also provide new insights into the physical phenomena involved in the associated electronic transport. This study examines a “bell-shape” negative GMR that grows in magnitude with decreasing temperatures in mm-wide devices fabricated from the high-mobility GaAs/AlGaAs 2-Dimensional Electron System (2DES). Experiments show that the span of this magnetoresistance on the magnetic-field-axis increases with decreasing device width, W, while there is no concurrent Hall resistance, Rxy, correction. A multi-conduction model, including negative diagonal-conductivity, and non-vanishing off-diagonal conductivity, reproduces experimental observations. The results suggest that a size effect in the mm-wide 2DES with mm-scale electron mean-free-paths is responsible for the observed “non-ohmic” size-dependent negative GMR. PMID:24067264

  6. Computational and Experimental Study of the Transient Transport Phenomena in a Full-Scale Twin-Roll Continuous Casting Machine

    Science.gov (United States)

    Xu, Mianguang; Li, Zhongyang; Wang, Zhaohui; Zhu, Miaoyong

    2017-02-01

    To gain a fundamental understanding of the transient fluid flow in twin-roll continuous casting, the current paper applies both large eddy simulation (LES) and full-scale water modeling experiments to investigate the characteristics of the top free surface, the stirring effect of the roll rotation, boundary layer fluctuations, and backflow stability. The results show that the characteristics of the top free surface and the flow field in the wedge-shaped pool region are quite different with and without consideration of the roll rotation. The roll rotation decreases the instantaneous fluctuation range of the top free surface but increases its horizontal velocity. The stirring effect of the roll rotation makes the flow field more homogeneous, and there is clear shear flow on the rotating roll surface. The vortex shedding induced by the Kármán Vortex Street from the submerged entry nozzle (SEN) causes a "velocity magnitude wave" and strongly influences the boundary layer stability and the backflow stability. The boundary layer fluctuations, or the "velocity magnitude wave" induced by the vortex shedding, could give rise to internal porosity. In the strip continuous casting process, the vortex shedding phenomenon indicates that laminar flow can give rise to instability, and this should be taken into account in the design of the feeding system and the setting of the operating parameters.

  7. Underestimation of Microearthquake Size by the Magnitude Scale of the Japan Meteorological Agency: Influence on Earthquake Statistics

    Science.gov (United States)

    Uchide, Takahiko; Imanishi, Kazutoshi

    2018-01-01

    Magnitude scales based on the amplitude of seismic waves, including the Japan Meteorological Agency magnitude scale (Mj), are commonly used in routine processing. The moment magnitude scale (Mw), however, is more physics-based and is able to evaluate any type and size of earthquake. This paper addresses the relation between Mj and Mw for microearthquakes. The relative moment magnitudes among earthquakes are well constrained by multiple spectral ratio analyses. The results for events in the Fukushima Hamadori and northern Ibaraki prefecture areas of Japan imply that Mj is significantly and systematically smaller than Mw for microearthquakes. The Mj-Mw curve has slopes of 1/2 and 1 for small and large values of Mj, respectively; for example, Mj = 1.0 corresponds to Mw = 2.0. A simple numerical simulation implies that this is due to anelastic attenuation and to recording with a finite sampling interval. The underestimation affects earthquake statistics. The completeness magnitude, Mc, below which the magnitude-frequency distribution deviates from the Gutenberg-Richter law, is effectively lower for Mw than for Mj, taking into account the systematic difference between Mj and Mw. The b values of the Gutenberg-Richter law are larger for Mw than for Mj. As the b values for Mj and Mw are well correlated, qualitative arguments using b values are not affected. While the estimated b values for Mj are below 1.5, those for Mw often exceed 1.5. This may affect the physical interpretation of the seismicity.

  8. Machine learning topological states

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-11-01

    Artificial neural networks and machine learning have now reached a new era after several decades of improvement where applications are to explode in many fields of science, industry, and technology. Here, we use artificial neural networks to study an intriguing phenomenon in quantum physics—the topological phases of matter. We find that certain topological states, either symmetry-protected or with intrinsic topological order, can be represented with classical artificial neural networks. This is demonstrated by using three concrete spin systems, the one-dimensional (1D) symmetry-protected topological cluster state and the 2D and 3D toric code states with intrinsic topological orders. For all three cases, we show rigorously that the topological ground states can be represented by short-range neural networks in an exact and efficient fashion—the required number of hidden neurons is as small as the number of physical spins and the number of parameters scales only linearly with the system size. For the 2D toric-code model, we find that the proposed short-range neural networks can describe the excited states with Abelian anyons and their nontrivial mutual statistics as well. In addition, by using reinforcement learning we show that neural networks are capable of finding the topological ground states of nonintegrable Hamiltonians with strong interactions and studying their topological phase transitions. Our results demonstrate explicitly the exceptional power of neural networks in describing topological quantum states, and at the same time provide valuable guidance to machine learning of topological phases in generic lattice models.

  9. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver, as all the reading passages and student activities are provided. Presented in s

  10. Pre-stressed piezoelectric bimorph micro-actuators based on machined 40 µm PZT thick films: batch scale fabrication and integration with MEMS

    International Nuclear Information System (INIS)

    Wilson, S A; Jourdain, R P; Owens, S

    2010-01-01

    The projected force–displacement capability of piezoelectric ceramic films in the 20–50 µm thickness range suggests that they are well suited to many micro-fluidic and micro-pneumatic applications. Furthermore, when they are configured as bending actuators and operated at ∼1 V µm⁻¹ they do not necessarily conform to the high-voltage, very low-displacement piezoelectric stereotype. Even so, they are rarely found today in commercial micro-electromechanical devices, such as micro-pumps and micro-valves, and the main barriers to making them much more widely available would appear to be processing incompatibilities rather than commercial desirability. In particular, the issues associated with integration of these devices into MEMS at the production level are highly significant and they have perhaps received less attention in the mainstream than they deserve. This paper describes a fabrication route based on ultra-precision ceramic machining and full-wafer bonding for cost-effective batch-scale production of thick-film PZT bimorph micro-actuators and their integration with MEMS. The resulting actuators are pre-stressed (ceramic in compression), which gives them added performance; they are true bimorphs with bi-directional capability and they exhibit full bulk piezoelectric ceramic properties. The devices are designed to integrate with ancillary system components using transfer-bonding techniques. The work forms part of the European Framework 6 Project 'Q2M—Quality to Micro'

  11. Face machines

    Energy Technology Data Exchange (ETDEWEB)

    Hindle, D.

    1999-06-01

    The article surveys the latest equipment available from the world's manufacturers of a range of machines for tunnelling. These are grouped under the headings: excavators; impact hammers; road headers; and shields and tunnel boring machines. Products of thirty manufacturers are referred to. Addresses and fax numbers of the companies are supplied. 5 tabs., 13 photos.

  12. Electric machine

    Science.gov (United States)

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY]; Reddy, Patel Bhageerath [Madison, WI]

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  13. Machine Learning.

    Science.gov (United States)

    Kirrane, Diane E.

    1990-01-01

    As scientists seek to develop machines that can "learn," that is, solve problems by imitating the human brain, a gold mine of information on the processes of human learning is being discovered, expert systems are being improved, and human-machine interactions are being enhanced. (SK)

  14. Nonplanar machines

    International Nuclear Information System (INIS)

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs

  15. Sizing Up the Milky Way: A Bayesian Mixture Model Meta-analysis of Photometric Scale Length Measurements

    Science.gov (United States)

    Licquia, Timothy C.; Newman, Jeffrey A.

    2016-11-01

    The exponential scale length (L_d) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and for helping us understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and are often statistically incompatible with one another. Here, we perform a Bayesian meta-analysis to determine an improved, aggregate estimate for L_d, utilizing a mixture-model approach to account for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery, we explore a variety of ways of modeling the nature of problematic measurements, and then employ a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of L_d available in the literature; these involve a broad assortment of observational data sets, MW models and assumptions, and methodologies, all tabulated herein. Analyzing the visible and infrared measurements separately yields estimates for L_d of 2.71 (+0.22/-0.20) kpc and 2.51 (+0.15/-0.13) kpc, respectively, whereas considering them all combined yields 2.64 ± 0.13 kpc. The ratio between the visible and infrared scale lengths determined here is very similar to that measured in external spiral galaxies. We use these results to update the model of the Galactic disk from our previous work, constraining its stellar mass to be 4.8 (+1.5/-1.1) × 10^10 M_⊙, and the MW's total stellar mass to be 5.7 (+1.5/-1.1) × 10^10 M_⊙.
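
    The mixture-model machinery can be illustrated with a small grid-posterior sketch: each measurement is treated as either "good" (its quoted error is trustworthy) or "problematic" (its error is inflated), and the two likelihoods are mixed with a prior weight. The measurements, mixture weight and inflation factor below are synthetic choices, not the paper's data or model set.

```python
import numpy as np

meas = np.array([2.3, 2.6, 3.2, 2.5, 2.9])    # L_d estimates in kpc (synthetic)
sigma = np.array([0.2, 0.3, 0.4, 0.2, 0.5])   # quoted 1-sigma errors (synthetic)
p_good, inflate = 0.7, 4.0                    # mixture weight and error inflation (assumed)

grid = np.linspace(1.5, 4.0, 501)             # flat prior on L_d over this range

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

log_post = np.zeros_like(grid)
for m, s in zip(meas, sigma):
    # each measurement is "good" with weight p_good, else its error is inflated
    like = p_good * gauss(grid, m, s) + (1.0 - p_good) * gauss(grid, m, inflate * s)
    log_post += np.log(like)

post = np.exp(log_post - log_post.max())
post /= np.trapz(post, grid)
print(f"posterior mean L_d = {np.trapz(grid * post, grid):.2f} kpc")
```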

  16. Stochastic scheduling on unrelated machines

    NARCIS (Netherlands)

    Skutella, Martin; Sviridenko, Maxim; Uetz, Marc Jochen

    2013-01-01

    Two important characteristics encountered in many real-world scheduling problems are heterogeneous machines/processors and a certain degree of uncertainty about the actual sizes of jobs. The first characteristic entails machine dependent processing times of jobs and is captured by the classical

  17. Machine Accounting. An Instructor's Guide.

    Science.gov (United States)

    Gould, E. Noah, Ed.

    Designed to prepare students to operate the types of accounting machines used in many medium-sized businesses, this instructor's guide presents a full-year high school course in machine accounting covering 120 hours of instruction. An introduction for the instructor suggests how to adapt the guide to present a 60-hour module which would be…

  18. Pore-Scale Investigation of Micron-Size Polyacrylamide Elastic Microspheres (MPEMs) Transport and Retention in Saturated Porous Media

    KAUST Repository

    Yao, Chuanjin

    2014-05-06

    Knowledge of micrometer-size polyacrylamide elastic microsphere (MPEM) transport and retention mechanisms in porous media is essential for the application of MPEMs as a smart sweep improvement and profile modification agent in improving oil recovery. A transparent micromodel packed with translucent quartz sand was constructed and used to investigate the pore-scale transport, surface deposition-release, and plugging deposition-remigration mechanisms of MPEMs in porous media. The results indicate that the combination of colloidal and hydrodynamic forces controls the deposition and release of MPEMs on pore-surfaces; the reduction of fluid salinity and the increase of Darcy velocity are beneficial to the MPEM release from pore-surfaces; the hydrodynamic forces also influence the remigration of MPEMs in pore-throats. MPEMs can plug pore-throats through the mechanisms of capture-plugging, superposition-plugging, and bridge-plugging, which produces resistance to water flow; the interception with MPEM particulate filters occurring in the interior of porous media can enhance the plugging effect of MPEMs; while the interception with MPEM particulate filters occurring at the surface of low-permeability layer can prevent the low-permeability layer from being damaged by MPEMs. MPEMs can remigrate in pore-throats depending on their elasticity through four steps of capture-plugging, elastic deformation, steady migration, and deformation recovery. © 2014 American Chemical Society.

  19. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, a Linux analysis software runs on a Macbook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  20. Machine translation

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, M

    1982-04-01

    Each language has its own structure. In translating one language into another one, language attributes and grammatical interpretation must be defined in an unambiguous form. In order to parse a sentence, it is necessary to recognize its structure. A so-called context-free grammar can help in this respect for machine translation and machine-aided translation. Problems to be solved in studying machine translation are taken up in the paper, which discusses subjects for semantics and for syntactic analysis and translation software. 14 references.

  1. Machine-Learning Research

    OpenAIRE

    Dietterich, Thomas G.

    1997-01-01

    Machine-learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (1) the improvement of classification accuracy by learning ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models.

  2. Dependence of some transmission factors on field size and treatment depth in external beam radiation therapy (EBRT) using the theratron equinox 100 cobalt 60 machine

    International Nuclear Information System (INIS)

    Odonkor, P.

    2015-07-01

    The use of beam modifiers in today's radiotherapy is very important, as they attenuate the beam and reduce the dose to the patient; hence the need to know the amount of attenuation (in terms of a transmission factor) they provide during treatment. The purpose of this research work is to evaluate the variation (or dependence) of the transmission factors (TFs) of a block tray and of physical wedges of different angles as a function of treatment depth and field size, using both isocentric setup techniques, SAD and SSD, and thus to compare the results from the two techniques. Wedge and tray TF measurements were performed in a full-scatter, large water phantom using a 0.04 cc ionization chamber and an average photon energy of 1.25 MeV from a cobalt-60 unit at an SAD/SSD of 100 cm, at various depths and field sizes, with gantry and collimator angles fixed at 0°. From the measurements carried out, the wedge TFs of the 15°, 30°, 45°, and 60° wedges were found to be 0.775±0.005, 0.650±0.010, 0.505±0.015, and 0.280±0.015 respectively, and the tray TF was found to be 0.960±0.003. The results also showed that both the wedge TF and the tray TF have a strong linear dependence on treatment depth; however, the variation of the 15° wedge TF and the tray TF with depth is less significant (less than 2%). The maximum percentage variation for the 15° wedge was 1.1% for the SAD setup and 1.59% for the SSD setup; for the tray it was 0.60% for the SAD setup and 0.12% for the SSD setup. The variation of the 15°, 30°, and 45° wedge TFs with field size was also less significant (less than 2%), and a weaker dependence was observed on field size than on treatment depth. However, the 60° wedge showed a significant variation (maximum of 2.22% and 2.88% for the SAD and SSD setups respectively), as an increase in field size was accompanied by an increase in its wedge TF. Also, though the tray TF graphically showed a strong linear dependence on field size, the
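
    The underlying quantity is simple to compute: a transmission factor is the ratio of the ionization-chamber reading with the modifier in the beam to the open-field reading at the same depth and field size, and the percentage variation across depths summarizes its depth dependence. The sketch below uses synthetic readings, not the study's measurements.

```python
import numpy as np

# synthetic chamber readings at four depths (arbitrary units)
open_field = np.array([100.0, 86.0, 73.5, 62.9])   # no modifier in the beam
with_wedge = np.array([77.4, 66.8, 57.3, 49.2])    # same depths, wedge in place

tf = with_wedge / open_field                       # transmission factor per depth
variation = 100.0 * (tf.max() - tf.min()) / tf.mean()
print("TF per depth:", np.round(tf, 3))
print(f"variation across depth: {variation:.2f}%")
```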

  3. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  4. Machine Learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  5. Machine Translation

    Indian Academy of Sciences (India)

    Research Mt System Example: The 'Janus' Translating Phone Project. The Janus ... based on laptops, and simultaneous translation of two speakers in a dialogue. For more ..... The current focus in MT research is on using machine learning.

  6. Power Scaling of the Size Distribution of Economic Loss and Fatalities due to Hurricanes, Earthquakes, Tornadoes, and Floods in the USA

    Science.gov (United States)

    Tebbens, S. F.; Barton, C. C.; Scott, B. E.

    2016-12-01

    Traditionally, the size of natural disaster events such as hurricanes, earthquakes, tornadoes, and floods is measured in terms of wind speed (m/sec), energy released (ergs), or discharge (m³/sec) rather than by economic loss or fatalities. Economic loss and fatalities from natural disasters result from the intersection of the human infrastructure and population with the size of the natural event. This study investigates the size versus cumulative number distribution of individual natural disaster events for several disaster types in the United States. Economic losses are adjusted for inflation to 2014 USD. The cumulative number divided by the time span of the data for each disaster type is the basis for making probabilistic forecasts in terms of the number of events greater than a given size per year and, its inverse, the return time. Such forecasts are of interest to insurers/re-insurers, meteorologists, seismologists, government planners, and response agencies. Plots of size versus cumulative number per year for economic loss and fatalities are well fit by power scaling functions of the form p(x) = C·x^(−β), where p(x) is the cumulative number of events with size equal to or greater than x, C is a constant (the activity level), x is the event size, and β is the scaling exponent. Economic loss and fatalities due to hurricanes, earthquakes, tornadoes, and floods are well fit by power functions over one to five orders of magnitude in size. Economic losses for hurricanes and tornadoes have greater scaling exponents, β = 1.1 and 0.9 respectively, whereas earthquakes and floods have smaller scaling exponents, β = 0.4 and 0.6 respectively. Fatalities for tornadoes and floods have greater scaling exponents, β = 1.5 and 1.7 respectively, whereas hurricanes and earthquakes have smaller scaling exponents, β = 0.4 and 0.7 respectively. The scaling exponents can be used to make probabilistic forecasts for time windows ranging from 1 to 1000 years
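
    The scaling exponent in such a law is commonly estimated by linear regression on log-log axes. The sketch below fits β and C of p(x) = C·x^(−β) from a synthetic list of event losses and derives a return time; the numbers are placeholders, not the study's data.

```python
import numpy as np

# synthetic event losses (billions of USD) observed over a 50-year window
losses = np.array([0.1, 0.2, 0.3, 0.5, 0.9, 1.4, 2.5, 6.0, 18.0, 60.0])
years = 50.0

x = np.sort(losses)
p = (len(x) - np.arange(len(x))) / years   # events per year with size >= x

slope, log_c = np.polyfit(np.log(x), np.log(p), 1)
beta, c = -slope, np.exp(log_c)
print(f"beta = {beta:.2f}, C = {c:.3f}")

# forecast: return time of a $10B-or-larger event under the fitted law
rate = c * 10.0 ** (-beta)
print(f"return time of a $10B event ~ {1.0 / rate:.0f} years")
```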

  7. Cell-size distribution and scaling in a one-dimensional Kolmogorov-Johnson-Mehl-Avrami lattice model with continuous nucleation

    Science.gov (United States)

    Néda, Zoltán; Járai-Szabó, Ferenc; Boda, Szilárd

    2017-10-01

    The Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth model is considered on a one-dimensional (1D) lattice. Cells can grow with constant speed and continuously nucleate on the empty sites. We offer an alternative mean-field-like approach for describing theoretically the dynamics and derive an analytical cell-size distribution function. Our method reproduces the same scaling laws as the KJMA theory and has the advantage that it leads to a simple closed form for the cell-size distribution function. It is shown that a Weibull distribution is appropriate for describing the final cell-size distribution. The results are discussed in comparison with Monte Carlo simulation data.
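
    A minimal Monte Carlo sketch of the 1D lattice model described above can reproduce the final cell-size distribution and check it against a Weibull form (the lattice size, nucleation rate, and periodic boundary are assumed choices, not the authors' setup):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        L, r = 100_000, 2e-4                  # lattice sites; nucleation prob. per empty site per step
        owner = np.zeros(L, dtype=np.int64)   # 0 = empty, k > 0 = label of the covering cell
        label = 0

        while (owner == 0).any():
            # continuous nucleation on empty sites
            seeds = (owner == 0) & (rng.random(L) < r)
            k = int(seeds.sum())
            owner[seeds] = np.arange(label + 1, label + k + 1)
            label += k
            # unit-speed growth: empty sites adopt a non-empty neighbour's label
            left, right = np.roll(owner, 1), np.roll(owner, -1)   # periodic boundary
            take_left = (owner == 0) & (left > 0)
            take_right = (owner == 0) & (right > 0) & ~take_left
            grown = owner.copy()
            grown[take_left], grown[take_right] = left[take_left], right[take_right]
            owner = grown

        cell_sizes = np.bincount(owner)[1:]   # final size of each cell, in sites
        shape, _, scale = stats.weibull_min.fit(cell_sizes, floc=0)
        print(f"Weibull shape ~ {shape:.2f}, scale ~ {scale:.1f} sites")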

  8. Chem-Prep PZT 95/5 for Neutron Generator Applications: Particle Size Distribution Comparison of Development and Production-Scale Powders

    International Nuclear Information System (INIS)

    SIPOLA, DIANA L.; VOIGT, JAMES A.; LOCKWOOD, STEVEN J.; RODMAN-GONZALES, EMILY D.

    2002-01-01

    The Materials Chemistry Department 1846 has developed a lab-scale chem-prep process for the synthesis of PNZT 95/5, a ferroelectric material that is used in neutron generator power supplies. This process (Sandia Process, or SP) has been successfully transferred to and scaled by Department 14192 (Ceramics and Glass Department), (Transferred Sandia Process, or TSP), to meet the future supply needs of Sandia for its neutron generator production responsibilities. In going from the development-size SP batch (1.6 kg/batch) to the production-scale TSP powder batch size (10 kg/batch), it was important that it be determined if the scaling process caused any "performance-critical" changes in the PNZT 95/5 being produced. One area where a difference was found was in the particle size distributions of the calcined PNZT powders. Documented in this SAND report are the results of an experimental study to determine the origin of the differences in the particle size distribution of the SP and TSP powders.

  9. FY 1992 research and development project for large-scale industrial technologies. Report on results of R and D of superhigh technological machining systems; 1992 nendo chosentan kako system no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-03-01

    Described herein are the FY 1992 results of the R and D project aimed at establishment of the technologies for development of, e.g., machine and electronic device members of superhigh precision and high functions by processing and superhigh-precision machining aided by excited beams. The elementary researches on superhigh-precision machining achieve the given targets for precision stability of the feed positioning device. The researches on development of high-precision rotating devices, on a trial basis, are directed to improvement of rotational precision of pneumatic static pressure bearings and magnetism correction/controlling circuits, increasing speed and precision of 3-point type rotational precision measurement methods, and development of rotation-driving motors, achieving rotational precision of 0.015 µm at 2000 rpm. The researches on the surface modification technologies aided by ion beams involve experiments for production of crystalline Si films and thin-film transistors of the Si films, using the surface-modified portion of a large-size glass substrate. The researches on superhigh-technological machining standard measurement involve development of length-measuring systems aided by a dye laser, achieving a precision of ±10 nm or less in a 100 mm measurement range. (NEDO)

  10. Machine Protection

    International Nuclear Information System (INIS)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  11. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine-learning-based, data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian networks and hidden Markov models are introduced as examples of a widely used data-driven classification/modeling strategy.

  12. Machine Protection

    CERN Document Server

    Zerlauth, Markus; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  13. Machine Protection

    Energy Technology Data Exchange (ETDEWEB)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg [European Organization for Nuclear Research, Geneva (Switzerland)

    2012-07-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  14. Teletherapy machine

    International Nuclear Information System (INIS)

    Panyam, Vinatha S.; Rakshit, Sougata; Kulkarni, M.S.; Pradeepkumar, K.S.

    2017-01-01

    Radiation Standards Section (RSS), RSSD, BARC is the national metrology institute for ionizing radiation. RSS develops and maintains radiation standards for X-ray, beta, gamma and neutron radiations. In radiation dosimetry, traceability, accuracy and consistency of radiation measurements are very important, especially in radiotherapy, where the success of patient treatment depends on the accuracy of the dose delivered to the tumour. Cobalt teletherapy machines have been used in the treatment of cancer since the early 1950s, and India had its first cobalt teletherapy machine installed at the Cancer Institute, Chennai, in 1956.

  15. The distribution of air bubble size in the pneumo-mechanical flotation machine

    Science.gov (United States)

    Brożek, Marian; Młynarczykowska, Anna

    2012-12-01

    The flotation rate constant is the value characterizing the kinetics of cyclic flotation. In the statistical theory of flotation its value is a function of the probabilities of collision, adhesion and detachment of a particle from the air bubble. The particle–air bubble collision plays a key role, since a collision must occur before particle–air bubble adhesion can happen. The probability of such an event is proportional to the ratio of the particle diameter to the bubble diameter. When the particle size is given, it is possible to control the value of the collision probability by means of the size of the air bubble. Consequently, it is significant to express the effect of physical and physicochemical factors upon the diameter of air bubbles as a mathematical dependence. In the pneumo-mechanical flotation machine the air bubbles are generated by the blades of the rotor. The dispersion rate is affected by, among others, the rotational speed of the rotor, the air flow rate and the liquid surface tension, the latter depending on the type and concentration of the applied flotation reagents. In this paper the authors present the distribution of air bubble diameters derived from the above factors, according to the laws of thermodynamics. The correctness of the derived dependences is verified empirically.

  16. Nanometer-scale sizing accuracy of particle suspensions on an unmodified cell phone using elastic light scattering.

    Science.gov (United States)

    Smith, Zachary J; Chu, Kaiqin; Wachsmann-Hogiu, Sebastian

    2012-01-01

    We report on the construction of a Fourier plane imaging system attached to a cell phone. By illuminating particle suspensions with a collimated beam from an inexpensive diode laser, angularly resolved scattering patterns are imaged by the phone's camera. Analyzing these patterns with Mie theory results in predictions of size distributions of the particles in suspension. Despite using consumer grade electronics, we extracted size distributions of sphere suspensions with better than 20 nm accuracy in determining the mean size. We also show results from milk, yeast, and blood cells. Performing these measurements on a portable device presents opportunities for field-testing of food quality, process monitoring, and medical diagnosis.

  17. Nanometer-scale sizing accuracy of particle suspensions on an unmodified cell phone using elastic light scattering.

    Directory of Open Access Journals (Sweden)

    Zachary J Smith

    Full Text Available We report on the construction of a Fourier plane imaging system attached to a cell phone. By illuminating particle suspensions with a collimated beam from an inexpensive diode laser, angularly resolved scattering patterns are imaged by the phone's camera. Analyzing these patterns with Mie theory results in predictions of size distributions of the particles in suspension. Despite using consumer grade electronics, we extracted size distributions of sphere suspensions with better than 20 nm accuracy in determining the mean size. We also show results from milk, yeast, and blood cells. Performing these measurements on a portable device presents opportunities for field-testing of food quality, process monitoring, and medical diagnosis.

  18. Is there an optimal pension fund size? A scale-economy analysis of administrative and investment costs

    NARCIS (Netherlands)

    Bikker, J.A.

    2013-01-01

    This paper investigates scale economies and the optimal scale of pension funds, estimating different cost functions with varying assumptions about the shape of the underlying average cost function: U-shaped versus monotonically declining. Using unique data for Dutch pension funds over 1992-2009, we

  19. Pore-Scale Investigation of Micron-Size Polyacrylamide Elastic Microspheres (MPEMs) Transport and Retention in Saturated Porous Media

    KAUST Repository

    Yao, Chuanjin; Lei, Guanglun; Cathles, Lawrence M.; Steenhuis, Tammo S.

    2014-01-01

    Knowledge of micrometer-size polyacrylamide elastic microsphere (MPEM) transport and retention mechanisms in porous media is essential for the application of MPEMs as a smart sweep improvement and profile modification agent in improving oil recovery.

  20. Impedance Scaling and Impedance Control

    International Nuclear Information System (INIS)

    Chou, W.; Griffin, J.

    1997-06-01

    When a machine becomes really large, such as the Very Large Hadron Collider (VLHC), whose circumference could reach the order of megameters, beam instability could be an essential bottleneck. This paper studies the scaling of the instability threshold vs. machine size when the coupling impedance scales in a "normal" way. It is shown that the beam would be intrinsically unstable for the VLHC. As a possible solution to this problem, it is proposed to introduce local impedance inserts for controlling the machine impedance. In the longitudinal plane, this could be done by using a heavily detuned rf cavity (e.g., a biconical structure), which could provide large imaginary impedance with the right sign (i.e., inductive or capacitive) while keeping the real part small. In the transverse direction, a carefully designed variation of the cross section of a beam pipe could generate negative impedance that would partially compensate the transverse impedance in one plane.

  1. A flexible and cost-effective compensation method for leveling using large-scale coordinate measuring machines and its application in aircraft digital assembly

    Science.gov (United States)

    Deng, Zhengping; Li, Shuanggao; Huang, Xiang

    2018-06-01

    In the assembly process of large-size aerospace products, the leveling and horizontal alignment of large components are essential prior to the installation of an inertial navigation system (INS) and the final quality inspection. In general, the inherent coordinate systems of large-scale coordinate measuring devices are not coincident with the geodetic horizontal system, and a dual-axis compensation system is commonly required for the measurement of difference in heights. These compensation systems are expensive and dedicated designs for different devices at present. Considering that a large-size assembly site usually needs more than one measuring device, a compensation approach which is versatile for different devices would be a more convenient and economic choice for manufacturers. In this paper, a flexible and cost-effective compensation method is proposed. Firstly, an auxiliary measuring device called a versatile compensation fixture (VCF) is designed, which mainly comprises reference points for coordinate transformation and a dual-axis inclinometer, and a kind of network tighten points (NTPs) are introduced and temporarily deployed in the large measuring space to further reduce transformation error. Secondly, the measuring principle of height difference is studied, based on coordinate transformation theory and trigonometry while considering the effects of earth curvature, and the coordinate transformation parameters are derived by least squares adjustment. Thirdly, the analytical solution of leveling uncertainty is analyzed, based on which the key parameters of the VCF and the proper deployment of NTPs are determined according to the leveling accuracy requirement. Furthermore, the proposed method is practically applied to the assembly of a large helicopter by developing an automatic leveling and alignment system. By measuring four NTPs, the leveling uncertainty (2σ) is reduced by 29.4% to about 0.12 mm, compared with that without NTPs.
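
    The least-squares derivation of transformation parameters from matched reference points can be sketched generically (an SVD-based rigid-body fit on synthetic points; the authors' actual method additionally models earth curvature and the inclinometer readings):

        import numpy as np

        def rigid_fit(P, Q):
            """Least-squares R, t with Q ~ R @ P + t; P, Q are (3, N) matched points."""
            cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
            H = (Q - cQ) @ (P - cP).T
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
            R = U @ D @ Vt
            return R, cQ - R @ cP

        # synthetic check (assumed data): recover a known rotation and translation
        rng = np.random.default_rng(0)
        P = rng.normal(size=(3, 6))
        a = 0.1
        R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                           [np.sin(a),  np.cos(a), 0.0],
                           [0.0,        0.0,       1.0]])
        Q = R_true @ P + np.array([[1.0], [2.0], [0.5]]) + rng.normal(scale=1e-4, size=(3, 6))
        R, t = rigid_fit(P, Q)
        print(np.abs(R - R_true).max())   # ~1e-4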

  2. Bypassing the Kohn-Sham equations with machine learning.

    Science.gov (United States)

    Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert

    2017-10-11

    Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.

  3. Machine testing

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with a laboratory exercise of 3 hours' duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercise includes a series of tests carried out by the student on a conventional and a numerically controlled lathe, respectively. This document...

  4. Application of Network Scale Up Method in the Estimation of Population Size for Men Who Have Sex with Men in Shanghai, China.

    Directory of Open Access Journals (Sweden)

    Jun Wang

    Full Text Available Men who have sex with men (MSM) are at high risk of HIV infection. For developing proper interventions, it is important to know the size of the MSM population. However, size estimation of MSM populations is still a significant public health challenge due to high cost, the hard-to-reach nature of the population, and associated stigma. We aimed to estimate the social network size (c value) in the general population and the size of the MSM population in Shanghai, China, by using the network scale-up method. A multistage random sampling was used to recruit participants aged from 18 to 60 years who had lived in Shanghai for at least 6 months. The "known population method", adjusted by backward estimation and a regression model, was applied to estimate the c value. The MSM population size was then estimated using an adjusted c value that accounts for the transmission effect through the level of social respect towards MSM. A total of 4017 participants were contacted for an interview, and 3907 participants met the inclusion criterion. The social network size (c value) of participants was 236 after adjustment. The estimated size of the MSM population was 36354 (95% CI: 28489–44219) for males in Shanghai aged 18 to 60 years, and the proportion of MSM among the total male population aged 18 to 60 years in Shanghai was 0.28%. We employed the network scale-up method and used a wide range of data sources to estimate the size of the MSM population in Shanghai, which is useful for HIV prevention and intervention among the target population.
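
    The scale-up arithmetic behind the abstract can be sketched as follows (all counts and population figures below are invented for illustration; the study additionally adjusts c for transmission and social-respect effects):

        import numpy as np

        N = 18_000_000                              # total population (assumed figure)
        S = np.array([50_000, 120_000, 30_000])     # sizes of "known" subpopulations (assumed)

        # per-respondent counts of acquaintances in each known subpopulation
        m_known = np.array([[1, 3, 0],
                            [0, 2, 1],
                            [2, 5, 0],
                            [1, 1, 1]])
        m_hidden = np.array([0, 1, 0, 2])           # acquaintances in the hidden population

        # known-population estimator of the personal network size c
        c = N * m_known.sum() / (S.sum() * len(m_known))
        # network scale-up estimate of the hidden population size
        hidden = N * m_hidden.mean() / c
        print(f"c ~ {c:.0f}, hidden population ~ {hidden:.0f}")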

  5. International orientation and export commitment in fast small and medium size firms internationalization: scales validation and implications for the Brazilian case

    Directory of Open Access Journals (Sweden)

    Marcelo André Machado

    Full Text Available Abstract A set of changes in the competitive environment has recently provoked the emergence of a new kind of organization that, since its creation, derives a meaningful share of its revenue from international activities developed in more than one continent. Within this new reality, the model of firm internationalization in phases, following the firm's growth, has lost its capacity to explain this process for small- and medium-sized enterprises (SMEs). Thus, in this paper, the international orientation (IO) and export commitment (EC) constructs have been revised in the theoretical context of the fast internationalization of medium-sized companies, so as to identify scales that more accurately measure these dimensions in the Brazilian setting. After a literature review and exploratory research, the IO and EC scales proposed by Knight and Cavusgil (2004) and Shamsuddoha and Ali (2006), respectively, were applied to a sample of 398 small- and medium-sized exporting Brazilian companies. In spite of the conjunctural and situational differences inherent to the Brazilian companies, the selected scales presented high measurement reliability. Furthermore, the field research outcomes provide evidence for the existence of a phenomenon of fast internationalization among medium-sized companies in Brazil, as well as support for some theoretical assumptions of other empirical investigations carried out with samples from developed countries.

  6. Machine rates for selected forest harvesting machines

    Science.gov (United States)

    R.W. Brinker; J. Kinard; Robert Rummer; B. Lanford

    2002-01-01

    Very little new literature has been published on the subject of machine rates and machine cost analysis since 1989 when the Alabama Agricultural Experiment Station Circular 296, Machine Rates for Selected Forest Harvesting Machines, was originally published. Many machines discussed in the original publication have undergone substantial changes in various aspects, not...

  7. Effects of Sizes and Conformations of Fish-Scale Collagen Peptides on Facial Skin Qualities and Transdermal Penetration Efficiency

    OpenAIRE

    Chai, Huey-Jine; Li, Jing-Hua; Huang, Han-Ning; Li, Tsung-Lin; Chan, Yi-Lin; Shiau, Chyuan-Yuan; Wu, Chang-Jer

    2010-01-01

    Fish-scale collagen peptides (FSCPs) were prepared using a given combination of proteases to hydrolyze tilapia (Oreochromis sp.) scales. FSCPs were determined to stimulate fibroblast cell proliferation and procollagen synthesis in a time- and dose-dependent manner. The transdermal penetration capabilities of the fractionated FSCPs were evaluated using the Franz-type diffusion cell model. The heavier FSCPs, 3500 and 4500 Da, showed higher cumulative penetration capability as opposed to the...

  8. The underestimated role of temperature-oxygen relationship in large-scale studies on size-to-temperature response.

    Science.gov (United States)

    Walczyńska, Aleksandra; Sobczyk, Łukasz

    2017-09-01

    The observation that ectotherm size decreases with increasing temperature (temperature-size rule; TSR) has been widely supported. This phenomenon intrigues researchers because neither its adaptive role nor the conditions under which it is realized are well defined. In light of recent theoretical and empirical studies, oxygen availability is an important candidate for understanding the adaptive role behind TSR. However, this hypothesis is still undervalued in TSR studies at the geographical level. We reanalyzed previously published data about the TSR pattern in diatoms sampled from Icelandic geothermal streams, which concluded that diatoms were an exception to the TSR. Our goal was to incorporate oxygen as a factor in the analysis and to examine whether this approach would change the results. Specifically, we expected that the strength of size response to cold temperatures would be different than the strength of response to hot temperatures, where the oxygen limitation is strongest. By conducting a regression analysis for size response at the community level, we found that diatoms from cold, well-oxygenated streams showed no size-to-temperature response, those from intermediate temperature and oxygen conditions showed reverse TSR, and diatoms from warm, poorly oxygenated streams showed significant TSR. We also distinguished the roles of oxygen and nutrition in TSR. Oxygen is a driving factor, while nutrition is an important factor that should be controlled for. Our results show that if the geographical or global patterns of TSR are to be understood, oxygen should be included in the studies. This argument is important especially for predicting the size response of ectotherms facing climate warming.

  9. Charging machine

    International Nuclear Information System (INIS)

    Medlin, J.B.

    1976-01-01

    A charging machine for loading fuel slugs into the process tubes of a nuclear reactor includes a tubular housing connected to the process tube, a charging trough connected to the other end of the tubular housing, a device for loading the charging trough with a group of fuel slugs, means for equalizing the coolant pressure in the charging trough with the pressure in the process tubes, means for pushing the group of fuel slugs into the process tube and a latch and a seal engaging the last object in the group of fuel slugs to prevent the fuel slugs from being ejected from the process tube when the pusher is removed and to prevent pressure liquid from entering the charging machine. 3 claims, 11 drawing figures

  10. Detection of atomic scale changes in the free volume void size of three-dimensional colorectal cancer cell culture using positron annihilation lifetime spectroscopy.

    Science.gov (United States)

    Axpe, Eneko; Lopez-Euba, Tamara; Castellanos-Rubio, Ainara; Merida, David; Garcia, Jose Angel; Plaza-Izurieta, Leticia; Fernandez-Jimenez, Nora; Plazaola, Fernando; Bilbao, Jose Ramon

    2014-01-01

    Positron annihilation lifetime spectroscopy (PALS) provides a direct measurement of the free volume void sizes in polymers and biological systems. This free volume is critical in explaining and understanding the physical and mechanical properties of polymers. Moreover, PALS has recently been proposed as a potential tool for detecting cancer at early stages, probing the differences in the subnanometer-scale free volume voids between cancerous and healthy skin samples of the same patient. Despite several investigations of free volume in complex cancerous tissues, no positron annihilation studies of living cancer cell cultures have been reported. We demonstrate that PALS can be applied to the study of living human 3D cell cultures. The technique is also capable of detecting atomic scale changes in the size of the free volume voids due to the biological responses to TGF-β. PALS may be developed to characterize the effect of different culture conditions on the free volume voids of cells grown in vitro.

  11. Genesis machines

    CERN Document Server

    Amos, Martyn

    2014-01-01

    Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities. Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers - but about life itself.

  12. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated because of the difficulty in attaining reliable estimates. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
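
    The flavour of such a sample-size analysis can be conveyed with a much simpler two-sample simulation (a Lincoln-Petersen/Chapman toy model with assumed numbers; the study itself used a Burnham joint live/dead-encounter model):

        import numpy as np

        rng = np.random.default_rng(42)

        def chapman_interval(N, n_marked, n_caught, reps=200):
            """Simulated 95% interval of Chapman estimates for a population of size N."""
            est = []
            for _ in range(reps):
                marked = rng.choice(N, size=n_marked, replace=False)
                caught = rng.choice(N, size=n_caught, replace=False)
                m = np.intersect1d(marked, caught).size
                est.append((n_marked + 1) * (n_caught + 1) / (m + 1) - 1)
            return np.percentile(est, [2.5, 97.5])

        # can this marking effort distinguish N from a population 15% smaller?
        for N in (1_000_000, 850_000):
            lo, hi = chapman_interval(N, n_marked=25_000, n_caught=25_000)
            print(f"N = {N:,}: 95% interval ({lo:,.0f}, {hi:,.0f})")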

  13. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
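
    The population-dynamics (cloning) estimator discussed here can be illustrated on a toy two-state Markov chain; varying the population size and the simulation time makes the finite-size and finite-time biases visible (all parameters below are assumed, and this is a sketch rather than the authors' algorithm):

        import numpy as np

        rng = np.random.default_rng(7)

        def psi_cloning(s, n_clones, T, q=0.3):
            """Cloning estimate of the SCGF psi(s) of A_T = sum_t x_t,
            where x_t is a two-state chain with flip probability q."""
            x = rng.integers(0, 2, size=n_clones)
            log_mean_w = 0.0
            for _ in range(T):
                flip = rng.random(n_clones) < q
                x = np.where(flip, 1 - x, x)              # evolve each clone
                w = np.exp(s * x)                         # exponential bias on the observable
                log_mean_w += np.log(w.mean())
                idx = rng.choice(n_clones, size=n_clones, p=w / w.sum())
                x = x[idx]                                # resample clones by weight
            return log_mean_w / T

        # estimates drift towards the infinite-size, infinite-time limit
        for n_clones, T in [(10, 100), (100, 1_000), (1_000, 10_000)]:
            print(n_clones, T, psi_cloning(s=1.0, n_clones=n_clones, T=T))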

  14. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which—as shown on the contact process—provides a significant improvement of the large deviation function estimators compared to the standard one.

  15. Finite-Size Scaling in a Two-Temperature Lattice Gas: a Monte Carlo Study of Critical Properties

    DEFF Research Database (Denmark)

    Larsen, Heine; Præstgaard, Eigil; Zia, R.K.P.

    1994-01-01

    We present computer studies of the critical properties of an Ising lattice gas driven to a non-equilibrium steady state by coupling to two temperature baths. Anisotropic scaling, a dominant feature near criticality, is used as a tool to extract the values of the critical temperature and some exponents.

  16. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    Directory of Open Access Journals (Sweden)

    Manan Gupta

    Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates.

  17. A laboratory scale approach to polymer solar cells using one coating/printing machine, flexible substrates, no ITO, no vacuum and no spincoating

    DEFF Research Database (Denmark)

    Carlé, Jon Eggert; Andersen, Thomas Rieks; Helgesen, Martin

    2013-01-01

    Printing of the silver back electrode under ambient conditions using simple laboratory equipment has been the missing link to fully replace evaporated metal electrodes. Here we demonstrate how a recently developed roll coater is further developed into a single machine that enables processing of a ... without the use of indium–tin-oxide (ITO) or vacuum evaporation steps, making it a significant step beyond the traditional laboratory polymer solar cell processing methods involving spin coating and metal evaporation.

  18. Dynamic fatigue of a machinable glass-ceramic

    Science.gov (United States)

    Smyth, K. K.; Magida, M. B.

    1983-01-01

    To assess the stress-corrosion susceptibility of a machinable glass-ceramic, its dynamic fatigue behavior was investigated by measuring its strength as a function of stress rate. Fracture mechanics techniques were used to analyze the results for the purpose of making lifetime predictions for components of this material. This material was concluded to have only moderate resistance (N = 30) to stress corrosion in ambient conditions. The effects of specimen size on strength were assessed for the material used in this study; it was concluded that the Weibull edge-flaw scaling law adequately describes the observed strength-size relation.
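
    The Weibull edge-flaw scaling law invoked here predicts how strength shifts with the stressed size; a one-line check with assumed numbers (not values from the study):

        # Weibull edge-flaw scaling: sigma_1 / sigma_2 = (L_2 / L_1)**(1/m)
        m = 10.0              # assumed Weibull modulus
        L1, L2 = 25.0, 50.0   # stressed edge lengths in mm (assumed)
        sigma1 = 150.0        # MPa, strength of the smaller specimens (assumed)
        sigma2 = sigma1 * (L1 / L2) ** (1.0 / m)
        print(f"predicted strength of the larger specimen: {sigma2:.1f} MPa")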

  19. Making extreme computations possible with virtual machines

    International Nuclear Information System (INIS)

    Reuter, J.; Chokoufe Nejad, B.

    2016-02-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
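
    The idea of replacing huge compiled amplitude libraries by compact byte-code plus a small interpreter can be caricatured in a few lines (a generic stack machine for illustration, not O'Mega's actual instruction set):

        # A toy stack-based virtual machine: the "program" is compact data and a
        # single interpreter loop evaluates any expression encoded in it.
        ADD, MUL, PUSH, LOAD = range(4)

        def run(bytecode, inputs):
            stack = []
            for op, arg in bytecode:
                if op == PUSH:
                    stack.append(arg)              # literal constant
                elif op == LOAD:
                    stack.append(inputs[arg])      # external input (momentum, coupling, ...)
                elif op == ADD:
                    stack.append(stack.pop() + stack.pop())
                elif op == MUL:
                    stack.append(stack.pop() * stack.pop())
            return stack.pop()

        # evaluate (p0 + p1) * g for the inputs [p0, p1, g]
        prog = [(LOAD, 0), (LOAD, 1), (ADD, None), (LOAD, 2), (MUL, None)]
        print(run(prog, [1.5, 2.5, 0.1]))          # -> 0.4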

  20. Size effect and scaling power-law for superelasticity in shape-memory alloys at the nanoscale.

    Science.gov (United States)

    Gómez-Cortés, Jose F; Nó, Maria L; López-Ferreño, Iñaki; Hernández-Saz, Jesús; Molina, Sergio I; Chuvilin, Andrey; San Juan, Jose M

    2017-08-01

    Shape-memory alloys capable of a superelastic stress-induced phase transformation and a high displacement actuation have promise for applications in micro-electromechanical systems for wearable healthcare and flexible electronic technologies. However, some of the fundamental aspects of their nanoscale behaviour remain unclear, including the question of whether the critical stress for the stress-induced martensitic transformation exhibits a size effect similar to that observed in confined plasticity. Here we provide evidence of a strong size effect on the critical stress that induces such a transformation, with a threefold increase in the trigger stress in pillars milled from [001] L2₁ single crystals of a Cu-Al-Ni shape-memory alloy as the diameter decreases from 2 μm to 260 nm. A power-law size dependence of n = -2 is observed for the nanoscale superelasticity. Our observation is supported by the atomic lattice shearing and an elastic model for homogeneous martensite nucleation.

  1. A comparative analysis of support vector machines and extreme learning machines.

    Science.gov (United States)

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches has become significant and has attracted many research efforts. This paper performs a comparative analysis of basic ELMs and support vector machines (SVMs) from two viewpoints that are different from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are exhibited with changing training sample size. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, ELMs show great superiority in computational speed, especially for large-scale sample problems. The results obtained can provide insight into the essential relationship between them, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
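
    A basic ELM of the kind compared in the paper trains only the output weights of a randomly initialized hidden layer by least squares; a sketch with assumed sizes and synthetic data:

        import numpy as np

        rng = np.random.default_rng(0)

        class ELM:
            """Single-hidden-layer ELM: random input weights, least-squares output weights."""
            def __init__(self, n_hidden):
                self.n_hidden = n_hidden

            def fit(self, X, y):
                self.W = rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)                    # random feature map
                self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # the only fitted part
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta

        # toy regression problem
        X = rng.uniform(-3.0, 3.0, size=(500, 1))
        y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=500)
        model = ELM(n_hidden=50).fit(X, y)
        print("train MSE:", np.mean((model.predict(X) - y) ** 2))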

  2. Annotated Draft Genome Assemblies for the Northern Bobwhite (Colinus virginianus and the Scaled Quail (Callipepla squamata Reveal Disparate Estimates of Modern Genome Diversity and Historic Effective Population Size

    Directory of Open Access Journals (Sweden)

    David L. Oldeschulte

    2017-09-01

    Full Text Available Northern bobwhite (Colinus virginianus; hereafter bobwhite) and scaled quail (Callipepla squamata) populations have suffered precipitous declines across most of their US ranges. Illumina-based first- (v1.0) and second- (v2.0) generation draft genome assemblies for the scaled quail and the bobwhite produced N50 scaffold sizes of 1.035 and 2.042 Mb, thereby producing a 45-fold improvement in contiguity over the existing bobwhite assembly, and ≥90% of the assembled genomes were captured within 1313 and 8990 scaffolds, respectively. The scaled quail assembly (v1.0 = 1.045 Gb) was ∼20% smaller than the bobwhite (v2.0 = 1.254 Gb), which was supported by kmer-based estimates of genome size. Nevertheless, estimates of GC content (41.72%; 42.66%), genome-wide repetitive content (10.40%; 10.43%), and MAKER-predicted protein coding genes (17,131; 17,165) were similar for the scaled quail (v1.0) and bobwhite (v2.0) assemblies, respectively. BUSCO analyses utilizing 3023 single-copy orthologs revealed a high level of assembly completeness for the scaled quail (v1.0; 84.8%) and the bobwhite (v2.0; 82.5%), as verified by comparison with well-established avian genomes. We also detected 273 putative segmental duplications in the scaled quail genome (v1.0), and 711 in the bobwhite genome (v2.0), including some that were shared among both species. Autosomal variant prediction revealed ∼2.48 and 4.17 heterozygous variants per kilobase within the scaled quail (v1.0) and bobwhite (v2.0) genomes, respectively, and estimates of historic effective population size were uniformly higher for the bobwhite across all time points in a coalescent model. However, large-scale declines were predicted for both species beginning ∼15–20 KYA.

  3. Annotated Draft Genome Assemblies for the Northern Bobwhite (Colinus virginianus) and the Scaled Quail (Callipepla squamata) Reveal Disparate Estimates of Modern Genome Diversity and Historic Effective Population Size.

    Science.gov (United States)

    Oldeschulte, David L; Halley, Yvette A; Wilson, Miranda L; Bhattarai, Eric K; Brashear, Wesley; Hill, Joshua; Metz, Richard P; Johnson, Charles D; Rollins, Dale; Peterson, Markus J; Bickhart, Derek M; Decker, Jared E; Sewell, John F; Seabury, Christopher M

    2017-09-07

    Northern bobwhite ( Colinus virginianus ; hereafter bobwhite) and scaled quail ( Callipepla squamata ) populations have suffered precipitous declines across most of their US ranges. Illumina-based first- (v1.0) and second- (v2.0) generation draft genome assemblies for the scaled quail and the bobwhite produced N50 scaffold sizes of 1.035 and 2.042 Mb, thereby producing a 45-fold improvement in contiguity over the existing bobwhite assembly, and ≥90% of the assembled genomes were captured within 1313 and 8990 scaffolds, respectively. The scaled quail assembly (v1.0 = 1.045 Gb) was ∼20% smaller than the bobwhite (v2.0 = 1.254 Gb), which was supported by kmer-based estimates of genome size. Nevertheless, estimates of GC content (41.72%; 42.66%), genome-wide repetitive content (10.40%; 10.43%), and MAKER-predicted protein coding genes (17,131; 17,165) were similar for the scaled quail (v1.0) and bobwhite (v2.0) assemblies, respectively. BUSCO analyses utilizing 3023 single-copy orthologs revealed a high level of assembly completeness for the scaled quail (v1.0; 84.8%) and the bobwhite (v2.0; 82.5%), as verified by comparison with well-established avian genomes. We also detected 273 putative segmental duplications in the scaled quail genome (v1.0), and 711 in the bobwhite genome (v2.0), including some that were shared among both species. Autosomal variant prediction revealed ∼2.48 and 4.17 heterozygous variants per kilobase within the scaled quail (v1.0) and bobwhite (v2.0) genomes, respectively, and estimates of historic effective population size were uniformly higher for the bobwhite across all time points in a coalescent model. However, large-scale declines were predicted for both species beginning ∼15-20 KYA. Copyright © 2017 Oldeschulte et al.

  4. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
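
    The probability-machine idea (conditional probabilities and counterfactual effect sizes obtained directly from a learning machine) can be sketched with a random forest on synthetic logistic data; the data, settings, and effect definition below are assumed for illustration:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # synthetic logistic data: binary exposure x0 plus a continuous covariate x1
        n = 5_000
        X = np.column_stack([rng.integers(0, 2, size=n), rng.normal(size=n)])
        p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1])))
        y = rng.random(n) < p

        rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=25).fit(X, y)

        # counterfactual risk difference for the exposure: P(y|x0=1, x1) - P(y|x0=0, x1)
        X1, X0 = X.copy(), X.copy()
        X1[:, 0], X0[:, 0] = 1, 0
        rd = rf.predict_proba(X1)[:, 1] - rf.predict_proba(X0)[:, 1]
        print(f"average risk difference ~ {rd.mean():.3f}")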

  5. Effects of Sizes and Conformations of Fish-Scale Collagen Peptides on Facial Skin Qualities and Transdermal Penetration Efficiency

    Directory of Open Access Journals (Sweden)

    Huey-Jine Chai

    2010-01-01

    Full Text Available Fish-scale collagen peptides (FSCPs) were prepared using a given combination of proteases to hydrolyze tilapia (Oreochromis sp.) scales. FSCPs were determined to stimulate fibroblast cell proliferation and procollagen synthesis in a time- and dose-dependent manner. The transdermal penetration capabilities of the fractionated FSCPs were evaluated using the Franz-type diffusion cell model. The heavier FSCPs, 3500 and 4500 Da, showed higher cumulative penetration capability as opposed to the lighter FSCPs, 2000 and 1300 Da. In addition, the heavier FSCPs seemed to preserve favorable coiled structures compared to the lighter ones, which present mainly as linear under confocal scanning laser microscopy. FSCPs, particularly the heavier ones, were concluded to efficiently penetrate the stratum corneum to the epidermis and dermis, activate fibroblasts, and accelerate collagen synthesis. The heavier FSCPs outweigh the lighter in transdermal penetration, likely as a result of preserving the desired structural features.

  6. Turbulence-enhanced prey encounter rates in larval fish : Effects of spatial scale, larval behaviour and size

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; MacKenzie, Brian

    1995-01-01

    Turbulent water motion has several effects on the feeding ecology of larval fish and other planktivorous predators. In this paper, we consider the appropriate spatial scales for estimating relative velocities between larval fish predators and their prey, and the effect that different choices of s... in the range in which turbulent intensity has an overall positive effect on larval fish ingestion rate probability. However, experimental data to test the model predictions are lacking. We suggest that the model inputs require further empirical study...

  7. Size variation and collapse of emphysema holes at inspiration and expiration CT scan: evaluation with modified length scale method and image co-registration.

    Science.gov (United States)

    Oh, Sang Young; Lee, Minho; Seo, Joon Beom; Kim, Namkug; Lee, Sang Min; Lee, Jae Seung; Oh, Yeon Mok

    2017-01-01

    A novel approach to size-based emphysema clustering has been developed, and the size variation and collapse of holes in emphysema clusters were evaluated at inspiratory and expiratory computed tomography (CT). Thirty patients were visually evaluated for the size-based emphysema clustering technique and a total of 72 patients were evaluated for analyzing collapse of the emphysema holes in this study. A new approach for the size differentiation of emphysema holes was developed using the length scale, Gaussian low-pass filtering, and an iteration approach. The volumetric CT results of the emphysema patients were then analyzed using the new method, and deformable registration was carried out between inspiratory and expiratory CT. Blind visual evaluations of EI by two readers had significant correlations with the classification using the size-based emphysema clustering method (r-values of reader 1: 0.186, 0.890, 0.915, and 0.941; reader 2: 0.540, 0.667, 0.919, and 0.942). The collapse of emphysema holes measured by deformable registration was compared with the pulmonary function test (PFT) parameters using Pearson's correlation test. The mean extents of the low-attenuation area (LAA) and of the size-classified emphysema holes (E1, ...) were also assessed; size-based analysis of emphysema holes may be useful for understanding the dynamic collapse of emphysema and its functional relation.

  8. A Field Study of Pixel-Scale Variability of Raindrop Size Distribution in the MidAtlantic Region

    Science.gov (United States)

    Tokay, Ali; D'adderio, Leo Pio; Wolff, David P.; Petersen, Walter A.

    2016-01-01

    The spatial variability of parameters of the raindrop size distribution and its derivatives is investigated through a field study where collocated Particle Size and Velocity (Parsivel2) and two-dimensional video disdrometers were operated at six sites at Wallops Flight Facility, Virginia, from December 2013 to March 2014. The three-parameter exponential function was employed to determine the spatial variability across the study domain where the maximum separation distance was 2.3 km. The nugget parameter of the exponential function was set to 0.99 and the correlation distance d0 and shape parameter s0 were retrieved by minimizing the root-mean-square error, after fitting it to the correlations of physical parameters. Fits were very good for almost all 15 physical parameters. The retrieved d0 and s0 were about 4.5 km and 1.1, respectively, for rain rate (RR) when all 12 disdrometers were reporting rainfall with a rain-rate threshold of 0.1 mm h⁻¹ for 1-min averages. The d0 decreased noticeably when one or more disdrometers were required to report rain. The d0 was considerably different for a number of parameters (e.g., mass-weighted diameter) but was about the same for the other parameters (e.g., RR) when the rainfall threshold was reset to 12 and 18 dBZ for Ka- and Ku-band reflectivity, respectively, following the expected minimum detectable signals of the Global Precipitation Measurement mission's spaceborne radars. The reduction of the database through elimination of a site did not alter d0 as long as the fit was adequate. The correlations of 5-min rain accumulations were lower when disdrometer observations were simulated for a rain gauge at different bucket sizes.
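
    The three-parameter exponential correlation model used in the study, rho(d) = rho0 * exp(-(d/d0)**s0) with the nugget rho0 fixed at 0.99, can be fit with standard tools; the separation distances and correlations below are invented placeholders:

        import numpy as np
        from scipy.optimize import curve_fit

        def corr_model(d, d0, s0, rho0=0.99):       # nugget fixed as in the study
            return rho0 * np.exp(-((d / d0) ** s0))

        # toy pairwise separation distances (km) and rain-rate correlations (assumed)
        d = np.array([0.2, 0.5, 0.9, 1.4, 1.9, 2.3])
        rho = np.array([0.97, 0.93, 0.88, 0.82, 0.75, 0.70])

        (d0, s0), _ = curve_fit(corr_model, d, rho, p0=[4.0, 1.0])
        print(f"d0 ~ {d0:.1f} km, s0 ~ {s0:.2f}")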

  9. Feeding rates in the chaetognath Sagitta elegans : effects of prey size, prey swimming behaviour and small-scale turbulence

    DEFF Research Database (Denmark)

    Saito, H.; Kiørboe, Thomas

    2001-01-01

    distances. We develop a simple prey encounter rate model by describing the swimming prey as a 'force dipole' and assuming that a critical signal strength is required to elicit an attack. By fitting the model to the observations, a critical signal strength of 10⁻² cm s⁻¹ is estimated; this is very... at rates up to an order of magnitude higher than similarly sized females, probably owing to differences in swimming behaviour. Sagitta elegans is an ambush predator that perceives its prey by hydromechanical signals. Faster swimming prey generates stronger signals and is, hence, perceived at longer...

  10. Scaling-Up Effective Language and Literacy Instruction: Evaluating the Importance of Scripting and Group Size Components

    DEFF Research Database (Denmark)

    Bleses, Dorthe; Højen, Anders; Dale, Philip

    2018-01-01

    participated in a cluster-randomized evaluation of three variations of a language-literacy focused curriculum (LEAP) comprising 40 twice-weekly 30-min lessons. LEAP-LARGE and LEAP-SMALL conditions involved educators’ implementation of a scope and sequence of objectives using scripted lessons provided to whole-class and small groups, respectively. In LEAP-OPEN, educators followed the scope and sequence but were allowed to determine the instructional activities for each of 40 lessons (i.e., they received no scripted lessons). A business-as-usual (BAU) condition served as the control. Overall, the largest effect sizes...

  11. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    Directory of Open Access Journals (Sweden)

    Warsha Singh

    Full Text Available An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll was 1.3 and 2.3 deg that resulted in <2% error in ground distance rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.

  12. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    Science.gov (United States)

    Singh, Warsha; Örnólfsdóttir, Erla B; Stefansson, Gunnar

    2014-01-01

    An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll was 1.3 and 2.3 deg that resulted in <2% error in ground distance rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.

  13. Representational Machines

    DEFF Research Database (Denmark)

    Photography not only represents space. Space is produced photographically. Since its inception in the 19th century, photography has brought to light a vast array of represented subjects. Always situated in some spatial order, photographic representations have been operatively underpinned by social (…) to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological possibilities, and genre distinctions. Presenting several distinct ways of producing space photographically, this book opens a new and important field of inquiry for photography research.

  14. Shear machines

    International Nuclear Information System (INIS)

    Astill, M.; Sunderland, A.; Waine, M.G.

    1980-01-01

    A shear machine for irradiated nuclear fuel elements has a replaceable shear assembly comprising a fuel element support block, a shear blade support and a clamp assembly which hold the fuel element to be sheared in contact with the support block. A first clamp member contacts the fuel element remote from the shear blade and a second clamp member contacts the fuel element adjacent the shear blade and is advanced towards the support block during shearing to compensate for any compression of the fuel element caused by the shear blade (U.K.)

  15. Optimal sizing of utility-scale photovoltaic power generation complementarily operating with hydropower: A case study of the world’s largest hydro-photovoltaic plant

    International Nuclear Information System (INIS)

    Fang, Wei; Huang, Qiang; Huang, Shengzhi; Yang, Jie; Meng, Erhao; Li, Yunyun

    2017-01-01

    Highlights: • Feasibility of complementary hydro-photovoltaic operation across the world is revealed. • Three scenarios of the novel operation mode are proposed to satisfy different load demand. • A method for optimally sizing a utility-scale photovoltaic plant is developed by maximizing the net revenue during its lifetime. • The influence of complementary hydro-photovoltaic operation upon water resources allocation is investigated. - Abstract: The high variability of solar energy poses huge challenges for utility-scale photovoltaic power generation to penetrate the power system. In this paper, complementary hydro-photovoltaic operation is explored, aiming at improving the power quality of photovoltaic generation and promoting its integration into the system. First, solar-rich and hydro-rich regions across the world that are suitable for implementing complementary hydro-photovoltaic operation are identified. Then, three practical scenarios of the novel operation mode are proposed to better satisfy different types of load demand. Moreover, a method for optimally sizing a photovoltaic plant integrated into a hydropower plant is developed by maximizing the net revenue during its lifetime. The Longyangxia complementary hydro-photovoltaic project, currently the world's largest hydro-photovoltaic power plant, is selected as a case study, and its optimal photovoltaic capacities for the different scenarios are calculated. Results indicate that hydropower installed capacity and the annual solar curtailment rate play crucial roles in the size optimization of a photovoltaic plant, and that complementary hydro-photovoltaic operation exerts little adverse effect upon the water resources allocation of the Longyangxia reservoir. The novel operation mode not only improves the penetration of utility-scale photovoltaic power generation but also provides a valuable reference for the large-scale utilization of other kinds of renewable energy worldwide.
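
    The sizing idea — pick the PV capacity that maximizes lifetime net revenue given hydro output, a shared transmission limit, and solar curtailment — can be sketched as a simple grid search. Everything numeric below (tariff, costs, capacities, the constant hydro profile) is a made-up stand-in, not data from the Longyangxia study.

```python
import numpy as np

TARIFF = 0.09      # $/kWh for delivered energy (hypothetical)
CAPEX = 800.0      # $/kW of installed PV (hypothetical)
OM_RATE = 0.01     # O&M as a fraction of CAPEX per year
LIFETIME = 25      # years
CHANNEL = 1280e3   # kW, transmission capacity shared with the hydro plant

rng = np.random.default_rng(0)
hydro_kw = 900e3 * np.ones(8760)                        # simplistic constant hydro output
solar_cf = np.clip(rng.normal(0.2, 0.15, 8760), 0, 1)   # hourly PV capacity factors

def lifetime_net_revenue(pv_kw):
    pv_out = pv_kw * solar_cf
    delivered = np.minimum(hydro_kw + pv_out, CHANNEL)  # excess PV is curtailed
    pv_delivered = delivered - hydro_kw                 # PV share of delivered energy
    annual_revenue = TARIFF * pv_delivered.sum()        # kWh == kW over 1-h steps
    return LIFETIME * (annual_revenue - OM_RATE * CAPEX * pv_kw) - CAPEX * pv_kw

sizes = np.linspace(0, 1000e3, 101)
best = max(sizes, key=lifetime_net_revenue)
print(f"optimal PV capacity ~ {best / 1e3:.0f} MW")
```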

  16. Nanomedicine: tiny particles and machines give huge gains.

    Science.gov (United States)

    Tong, Sheng; Fine, Eli J; Lin, Yanni; Cradick, Thomas J; Bao, Gang

    2014-02-01

    Nanomedicine is an emerging field that integrates nanotechnology, biomolecular engineering, life sciences and medicine; it is expected to produce major breakthroughs in medical diagnostics and therapeutics. Nano-scale structures and devices are compatible in size with proteins and nucleic acids in living cells. Therefore, the design, characterization and application of nano-scale probes, carriers and machines may provide unprecedented opportunities for achieving a better control of biological processes, and drastic improvements in disease detection, therapy, and prevention. Recent advances in nanomedicine include the development of nanoparticle (NP)-based probes for molecular imaging, nano-carriers for drug/gene delivery, multifunctional NPs for theranostics, and molecular machines for biological and medical studies. This article provides an overview of the nanomedicine field, with an emphasis on NPs for imaging and therapy, as well as engineered nucleases for genome editing. The challenges in translating nanomedicine approaches to clinical applications are discussed.

  17. Intraflagellar transport particle size scales inversely with flagellar length: revisiting the balance-point length control model.

    Science.gov (United States)

    Engel, Benjamin D; Ludington, William B; Marshall, Wallace F

    2009-10-05

    The assembly and maintenance of eukaryotic flagella are regulated by intraflagellar transport (IFT), the bidirectional traffic of IFT particles (recently renamed IFT trains) within the flagellum. We previously proposed the balance-point length control model, which predicted that the frequency of train transport should decrease as a function of flagellar length, thus modulating the length-dependent flagellar assembly rate. However, this model was challenged by the differential interference contrast microscopy observation that IFT frequency is length independent. Using total internal reflection fluorescence microscopy to quantify protein traffic during the regeneration of Chlamydomonas reinhardtii flagella, we determined that anterograde IFT trains in short flagella are composed of more kinesin-associated protein and IFT27 proteins than trains in long flagella. This length-dependent remodeling of train size is consistent with the kinetics of flagellar regeneration and supports a revised balance-point model of flagellar length control in which the size of anterograde IFT trains tunes the rate of flagellar assembly.
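
    A minimal toy version of the revised balance-point model helps show why length-dependent train size yields a stable set point: cargo delivered per unit time falls as 1/L while disassembly stays constant, so the flagellum settles where the two balance. The functional forms and rate constants below are assumptions for illustration, not the paper's fitted model.

```python
# Assumed forms: trains arrive at a length-independent frequency f, each
# carrying cargo proportional to 1/L; disassembly proceeds at constant rate d.
f, c, d = 1.0, 2.0, 0.5   # hypothetical rate constants

def dL_dt(L):
    assembly = f * c / L   # length-dependent delivery per unit time
    return assembly - d

# Euler integration of flagellar regeneration from a short stub.
L, dt, t_end = 0.5, 0.01, 40.0
for _ in range(int(t_end / dt)):
    L += dL_dt(L) * dt

print(f"simulated length {L:.2f} vs balance point {f * c / d:.2f}")
```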

  18. Electricity of machine tool

    International Nuclear Information System (INIS)

    Gijeon media editorial department

    1977-10-01

    This book is divided into three parts. The first part deals with electric machines, from generators to motors: the motor as a power source of the machine tool, and electric devices for machine tools such as main-circuit switches, automatic machines, knife switches and push buttons, snap switches, protection devices, timers, solenoids, and rectifiers. The second part handles wiring diagrams, covering the basic electric circuits of machine tools and the wiring diagrams of machines such as milling, planing, and grinding machines. The third part introduces fault diagnosis of machines, giving practical solutions according to the diagnosis and describing the diagnostic method using voltage and resistance measurements with a tester.

  19. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  20. Machine Protection

    CERN Document Server

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an interlock system providing the glue between these systems.

  1. Magnetic susceptibility of road deposited sediments at a national scale – Relation to population size and urban pollution

    International Nuclear Information System (INIS)

    Jordanova, Diana; Jordanova, Neli; Petrov, Petar

    2014-01-01

    Magnetic properties of road dusts from 26 urban sites in Bulgaria are studied. Temporal variations of magnetic susceptibility (χ) during eighteen months of monitoring account for approximately one-third of the mean annual values. Analysis of heavy metal contents and magnetic parameters for the fine fraction shows a high negative correlation (r = −0.84) between the ratio ARM/χ and Pb content. It suggests that Pb is related to brake/tyre wear emissions, which release larger particles and higher Pb during slow driving and braking. Bulk χ values of road dusts per city show significant correlation with population size and mean annual NO2 concentration on a log-normal scale. The results demonstrate the applicability of magnetic measurements of road dusts for estimation of mean NO2 levels at high spatial density, which is important for pollution modelling and health risk assessment. - Highlights: • Temporal variations of road dust magnetic susceptibility comprise 1/3 of the signal. • A high negative correlation between Pb content and the magnetic ratio ARM/χ is obtained. • Brake and tyre wear emissions are the main pollution sources of the road dusts. • Road dust magnetic susceptibility rises in parallel with the logarithm of population size. • A linear correlation is found between mean NO2 concentrations and susceptibility. - Magnetic susceptibility of road dusts on a national scale increases proportionally to population size and mean NO2 concentrations due to the effect of traffic-related pollution.
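
    The reported city-level relation — susceptibility rising linearly with the logarithm of population size — amounts to a one-line regression. The sketch below uses invented numbers purely to show the fit; the actual Bulgarian measurements are not reproduced here.

```python
import numpy as np

# Hypothetical per-city data: mean road-dust susceptibility vs population.
population = np.array([8e3, 3e4, 7e4, 1.5e5, 3.5e5, 1.2e6])
chi = np.array([45.0, 62.0, 80.0, 95.0, 118.0, 150.0])   # 1e-8 m3/kg

# The reported relation is linear in log(population): chi = a*log10(pop) + b.
a, b = np.polyfit(np.log10(population), chi, 1)
r = np.corrcoef(np.log10(population), chi)[0, 1]
print(f"chi ~ {a:.1f} * log10(pop) + {b:.1f},  r = {r:.2f}")
```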

  2. Detection and sizing of large-scale cracks in centrifugally cast stainless steel pipes using Lamb waves

    International Nuclear Information System (INIS)

    Ngoc, T.D.K.; Avioli, M.J. Jr.

    1988-01-01

    Application of conventional ultrasonic nondestructive evaluation (NDE) techniques to centrifugally cast stainless steel (CCSS) pipes in pressurized water reactors (PWRs) has been limited, mainly due to the anisotropy of the CCSS materials. Phenomena such as beam skewing and distortion are directly attributable to this anisotropy and cause severe difficulties in crack detection and sizing. To improve CCSS inspectability, the feasibility of using Lamb waves as the probing mechanism for detecting and characterizing a surface-breaking crack originating from the pipe interior surface is discussed. A similar research effort has been reported by Rokhlin who investigated the interaction of Lamb waves with delaminations in thin sheets. Rokhlin and Adler also reported recently on the use of Lamb waves for evaluating spot welds. The motivation for using this probing mechanism derives from the recognition that the difficulties introduced by beam skewing, beam distortion, and high attenuation are circumvented, since Lamb waves are not bulk waves, but are resonant vibrational modes of a solid plate

  3. Continental-scale transport of sediments by the Baltic Ice Stream elucidated by coupled grain size and Nd provenance analyses

    Science.gov (United States)

    Boswell, Steven M.; Toucanne, Samuel; Creyts, Timothy T.; Hemming, Sidney R.

    2018-05-01

    We introduce a methodology for determining the transport distance of subglacially comminuted and entrained sediments. We pilot this method on sediments from the terminal margin of the Baltic Ice Stream, the largest ice stream of the Fennoscandian Ice Sheet during the Last Glacial Maximum. A strong correlation (R2 = 0.83) between the εNd and latitudes of circum-Baltic river sediments enables us to use εNd as a calibrated measure of distance. The proportion of subglacially transported sediments in a sample is estimated from grain size ratios in the silt fraction (…). This result agrees with investigations of Fennoscandinavian erosion, and is consistent with rapid ice flow into the Baltic basins prior to the Last Glacial Maximum. The methodology introduced here could be used to infer the distances of glacigenic sediment transport from Late Pleistocene and earlier glaciations.
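
    The calibration step — regress latitude on εNd for known river sediments, then invert the line for glacigenic samples — can be sketched as follows. The εNd/latitude pairs are invented for illustration (the paper reports R2 = 0.83 for its calibration, not these values).

```python
import numpy as np

# Hypothetical circum-Baltic calibration points: latitude vs epsilon-Nd.
lat = np.array([54.0, 56.0, 58.0, 60.0, 62.0, 64.0, 66.0])
eNd = np.array([-12.5, -14.0, -15.8, -17.0, -19.2, -20.5, -22.1])

slope, intercept = np.polyfit(eNd, lat, 1)   # linear calibration line

def source_latitude(eNd_sample):
    """Predict the latitude of a sample's subglacial source from its eNd."""
    return slope * eNd_sample + intercept

print(f"sample eNd = -18 -> source latitude ~ {source_latitude(-18.0):.1f} deg N")
```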

  4. Allometric scaling of population variance with mean body size is predicted from Taylor's law and density-mass allometry.

    Science.gov (United States)

    Cohen, Joel E; Xu, Meng; Schuster, William S F

    2012-09-25

    Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
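
    The VMA prediction follows from composing the two power laws; writing it out makes clear why the predicted exponent is the product of the TL and DMA exponents. A minimal derivation with symbolic constants (a, b from TL; c, d from DMA):

```latex
% Taylor's law (TL):            \operatorname{var}(N) = a\,\bar{N}^{\,b}
% Density-mass allometry (DMA): \bar{N} = c\,M^{d}
% Substituting DMA into TL yields variance-mass allometry (VMA):
\[
  \operatorname{var}(N) \;=\; a\,\bar{N}^{\,b}
                        \;=\; a\,(c\,M^{d})^{b}
                        \;=\; (a\,c^{\,b})\, M^{\,bd},
\]
% i.e. a power law in mean individual body mass M with exponent b*d.
```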

  5. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    Science.gov (United States)

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is a growing interest in the application of machine learning (ML) techniques to address clinical problems, the use of deep learning in healthcare has only recently gained attention. Deep learning, such as the deep neural network (DNN), has achieved impressive results in the areas of speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexities of its framework. Furthermore, this method has not yet been demonstrated to achieve better performance than other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare a DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that the DNN and the gradient boosting decision tree (GBDT) achieve similarly high prediction accuracies that are better than those of the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, the DNN achieves optimal results using smaller amounts of patient data than the GBDT method.
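
    A scaled-down version of such a four-way comparison can be assembled with scikit-learn; the snippet below runs on synthetic stand-in data (the real study used ~800,000 claims records, and its DNN was presumably more elaborate than this small MLP).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Imbalanced synthetic data standing in for 5-year stroke occurrence.
X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.9], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR":   LogisticRegression(max_iter=1000),
    "SVM":  SVC(probability=True),
    "GBDT": GradientBoostingClassifier(),
    "DNN":  MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500),
}
for name, model in models.items():
    model.fit(Xtr, ytr)
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```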

  6. Micro-scale grain-size analysis and magnetic properties of coal-fired power plant fly ash and its relevance for environmental magnetic pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Blaha, U.; Sapkota, B.; Appel, E.; Stanjek, H.; Rosler, W. [University of Tubingen, Tubingen (Germany). Inst. of Geoscience

    2008-11-15

    Two fly ash samples from a black coal-fired power plant (Bexbach, Germany) were investigated for their magnetic properties, particle structure, grain-size distribution, and chemical composition. Grain-size distribution was determined on bulk samples and on magnetic extracts. Magnetic susceptibility of different grain-size fractions was analyzed with respect to the corresponding amounts of the fractions; high- and low-temperature dependence of magnetic susceptibility and thermal demagnetization of IRM identified magnetite and hematite as magnetic phases. Magnetic spherules were quantitatively extracted from bulk fly ash samples and examined using SEM/EDX analysis. Particle morphology and grain-size analysis of the magnetically extracted material were studied. Individual spherule types were identified, and internal structures of selected polished particles were investigated by SEM and EDX analyses. Main element contents of the internal structures, which consist of 'magnetite' crystals and a 'glassy' matrix, were systematically determined and statistically assessed. The chemical data of the micro-scale structures in the magnetic spherules were compared with XRF data from bulk material, revealing the relative element distribution in composite magnetic spherules. Comparison of the bulk sample grain-size (0.5-300 µm) and grain-size spectra from magnetic extracts (1-186.5 µm) shows that strongly magnetic particles mainly occur in the fine fractions of < 63 µm. This study comprises a comprehensive characterization of coal-fired power plant fly ash, using magnetic, chemical, and microscopic methods. The results can serve as reference data for a variety of environmental magnetic studies.

  7. Hematoma shape, hematoma size, Glasgow coma scale score and ICH score: which predicts the 30-day mortality better for intracerebral hematoma?

    Directory of Open Access Journals (Sweden)

    Chih-Wei Wang

    Full Text Available To investigate the performance of hematoma shape, hematoma size, Glasgow coma scale (GCS) score, and intracerebral hematoma (ICH) score in predicting the 30-day mortality for ICH patients, and to examine the influence of the estimation error of hematoma size on the prediction of 30-day mortality. This retrospective study, approved by a local institutional review board with written informed consent waived, recruited 106 patients diagnosed as ICH by non-enhanced computed tomography study. The hemorrhagic shape, hematoma size measured by computer-assisted volumetric analysis (CAVA) and estimated by the ABC/2 formula, ICH score, and GCS score were examined. The performance of the aforementioned variables in predicting 30-day mortality was evaluated. Statistical analysis was performed using Kolmogorov-Smirnov tests, paired t test, nonparametric test, linear regression analysis, and binary logistic regression. The receiver operating characteristic curves were plotted and areas under the curve (AUC) were calculated for 30-day mortality. A P value less than 0.05 was considered statistically significant. The overall 30-day mortality rate was 15.1% of ICH patients. The hematoma shape, hematoma size, ICH score, and GCS score all significantly predict the 30-day mortality for ICH patients, with an AUC of 0.692 (P = 0.0018), 0.715 (P = 0.0008) (by ABC/2) to 0.738 (P = 0.0002) (by CAVA), 0.877 (P<0.0001) (by ABC/2) to 0.882 (P<0.0001) (by CAVA), and 0.912 (P<0.0001), respectively. Our study shows that hematoma shape, hematoma size, ICH score, and GCS score all significantly predict the 30-day mortality in an increasing order of AUC. The effect of overestimation of hematoma size by the ABC/2 formula in predicting the 30-day mortality could be remedied by using the ICH score.

  8. Hematoma Shape, Hematoma Size, Glasgow Coma Scale Score and ICH Score: Which Predicts the 30-Day Mortality Better for Intracerebral Hematoma?

    Science.gov (United States)

    Wang, Chih-Wei; Liu, Yi-Jui; Lee, Yi-Hsiung; Hueng, Dueng-Yuan; Fan, Hueng-Chuen; Yang, Fu-Chi; Hsueh, Chun-Jen; Kao, Hung-Wen; Juan, Chun-Jung; Hsu, Hsian-He

    2014-01-01

    Purpose To investigate the performance of hematoma shape, hematoma size, Glasgow coma scale (GCS) score, and intracerebral hematoma (ICH) score in predicting the 30-day mortality for ICH patients. To examine the influence of the estimation error of hematoma size on the prediction of 30-day mortality. Materials and Methods This retrospective study, approved by a local institutional review board with written informed consent waived, recruited 106 patients diagnosed as ICH by non-enhanced computed tomography study. The hemorrhagic shape, hematoma size measured by computer-assisted volumetric analysis (CAVA) and estimated by the ABC/2 formula, ICH score, and GCS score were examined. The performance of the aforementioned variables in predicting 30-day mortality was evaluated. Statistical analysis was performed using Kolmogorov-Smirnov tests, paired t test, nonparametric test, linear regression analysis, and binary logistic regression. The receiver operating characteristic curves were plotted and areas under the curve (AUC) were calculated for 30-day mortality. A P value less than 0.05 was considered statistically significant. Results The overall 30-day mortality rate was 15.1% of ICH patients. The hematoma shape, hematoma size, ICH score, and GCS score all significantly predict the 30-day mortality for ICH patients, with an AUC of 0.692 (P = 0.0018), 0.715 (P = 0.0008) (by ABC/2) to 0.738 (P = 0.0002) (by CAVA), 0.877 (P<0.0001) (by ABC/2) to 0.882 (P<0.0001) (by CAVA), and 0.912 (P<0.0001), respectively. Conclusion Our study shows that hematoma shape, hematoma size, ICH score, and GCS score all significantly predict the 30-day mortality in an increasing order of AUC. The effect of overestimation of hematoma size by the ABC/2 formula in predicting the 30-day mortality could be remedied by using the ICH score. PMID:25029592
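
    Two pieces of the methods are easy to make concrete: the bedside ABC/2 volume estimate and the AUC used to compare predictors. The cohort numbers below are fabricated stand-ins purely to exercise the calculation, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def abc_over_2(a_cm, b_cm, c_cm):
    """Bedside ABC/2 ellipsoid estimate of hematoma volume (mL): A and B are
    the largest perpendicular diameters on the axial slice, C the vertical extent."""
    return a_cm * b_cm * c_cm / 2.0

# Hypothetical cohort: estimated volumes and 30-day outcomes (1 = death).
volumes = np.array([12.0, 35.0, 8.0, 60.0, 22.0, 95.0, 15.0, 40.0])
died    = np.array([0,    1,    0,   1,    0,    1,    0,    0])

print(f"ABC/2 example: {abc_over_2(4.0, 3.0, 5.0):.0f} mL")
print(f"AUC of volume for 30-day mortality: {roc_auc_score(died, volumes):.2f}")
```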

  9. Virtual Vector Machine for Bayesian Online Classification

    OpenAIRE

    Minka, Thomas P.; Xiang, Rongjing; Qi, Yuan

    2012-01-01

    In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows you to smoothly trade off prediction accuracy with memory size. The virtual vector machine summarizes the information con...

  10. Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems.

    Science.gov (United States)

    Herman, Agnieszka

    2010-06-01

    Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and the processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(−1−α) exp[(1−α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as a floe-area distribution in agreement with observations.
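
    The quoted distribution is straightforward to evaluate numerically; the sketch below normalizes the density by quadrature and tabulates it at a few floe sizes. The exponent α = 1.8 is an arbitrary choice for illustration, not a value fitted to FSD observations.

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.8   # hypothetical exponent

def pdf_unnorm(x):
    # Truncated Pareto from the abstract: the exponential factor suppresses
    # the smallest floes, bending the pure power law at small x.
    return x ** (-1.0 - alpha) * np.exp((1.0 - alpha) / x)

Z, _ = quad(pdf_unnorm, 1e-9, np.inf)   # normalization constant

for x in (0.2, 1.0, 5.0, 25.0):
    print(f"x = {x:5.1f}: P(x) = {pdf_unnorm(x) / Z:.4g}")
```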

  11. Outbreaks, gene flow and effective population size in the migratory locust, Locusta migratoria: a regional-scale comparative survey.

    Science.gov (United States)

    Chapuis, Marie-Pierre; Loiseau, Anne; Michalakis, Yannis; Lecoq, Michel; Franc, Alex; Estoup, Arnaud

    2009-03-01

    The potential effect of population outbreaks on within and between genetic variation of populations in pest species has rarely been assessed. In this study, we compare patterns of genetic variation in different sets of historically frequently outbreaking and rarely outbreaking populations of an agricultural pest of major importance, the migratory locust, Locusta migratoria. We analyse genetic variation within and between 24 populations at 14 microsatellites in Western Europe, where only ancient and low-intensity outbreaks have been reported (non-outbreaking populations), and in Madagascar and Northern China, where frequent and intense outbreak events have been recorded over the last century (outbreaking populations). Our comparative survey shows that (i) the long-term effective population size is similar in outbreaking and non-outbreaking populations, as evidenced by similar estimates of genetic diversity, and (ii) gene flow is substantially larger among outbreaking populations than among non-outbreaking populations, as evidenced by a fourfold to 30-fold difference in FST values. We discuss the implications for population dynamics and the consequences for management strategies of the observed patterns of genetic variation in L. migratoria populations with contrasting historical outbreak frequency and extent.

  12. Large superconducting conductors and joints for fusion magnets: From conceptual design to test at full size scale

    International Nuclear Information System (INIS)

    Ciazynski, D.; Duchateau, J.L.; Decool, P.; Libeyre, P.; Turck, B.

    2001-01-01

    A new kind of superconducting conductor, using the so-called cable-in-conduit concept, is emerging, mainly within fusion activity. It is to be noted that at the present time no large Nb3Sn magnet in the world is operating using this concept. The difficulty of this technology, which has now been studied for 20 years, is that it has to integrate major progress in multiple interconnected new fields such as: large numbers (1000) of superconducting strands, high-current conductors (50 kA), forced-flow cryogenics, Nb3Sn technology, low-loss conductors in pulsed operation, high-current connections, high-voltage insulation (10 kV), and economical and industrial feasibility. CEA was deeply involved during these last 10 years in this development, which took place in the frame of the NET and ITER technological programs. One major milestone was reached in 1998-1999 with the successful tests by our Association of three full size conductor and connection samples in the Sultan facility (Villigen, Switzerland). (author)

  13. Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems

    Science.gov (United States)

    Herman, Agnieszka

    2010-06-01

    Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and the processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(−1−α) exp[(1−α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as a floe-area distribution in agreement with observations.

  14. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today's markets. The authors address traditional machining topics, such as: single- and multiple-point cutting processes; grinding; component accuracy and metrology; shear stress in cutting; cutting temperature and analysis; and chatter. They also address non-traditional machining, such as: electrical discharge machining, electrochemical machining, and laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  15. Machine Protection

    International Nuclear Information System (INIS)

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an interlock system providing the glue between these systems. The most recent accelerator, the LHC, will operate with about 3 × 10^14 protons per beam, corresponding to an energy stored in each beam of 360 MJ. This energy can cause massive damage to accelerator equipment in case of uncontrolled beam loss, and a single accident damaging vital parts of the accelerator could interrupt operation for years. This article provides an overview of the requirements for protection of accelerator equipment and introduces the various protection systems. Examples are mainly from LHC, SNS and ESS

  16. Scaling down the size and increasing the throughput of glycosyltransferase assays: activity changes on stem cell differentiation.

    Science.gov (United States)

    Patil, Shilpa A; Chandrasekaran, E V; Matta, Khushi L; Parikh, Abhirath; Tzanakakis, Emmanuel S; Neelamegham, Sriram

    2012-06-15

    Glycosyltransferases (glycoTs) catalyze the transfer of monosaccharides from nucleotide-sugars to carbohydrate-, lipid-, and protein-based acceptors. We examined strategies to scale down and increase the throughput of glycoT enzymatic assays because traditional methods require large reaction volumes and complex chromatography. Approaches tested used (i) microarray pin printing, an appropriate method when glycoT activity was high; (ii) microwells and microcentrifuge tubes, a suitable method for studies with cell lysates when enzyme activity was moderate; and (iii) C(18) pipette tips and solvent extraction, a method that enriched reaction product when the extent of reaction was low. In all cases, reverse-phase thin layer chromatography (RP-TLC) coupled with phosphorimaging quantified the reaction rate. Studies with mouse embryonic stem cells (mESCs) demonstrated an increase in overall β(1,3)galactosyltransferase and α(2,3)sialyltransferase activity and a decrease in α(1,3)fucosyltransferases when these cells differentiate toward cardiomyocytes. Enzymatic and lectin binding data suggest a transition from Lewis(x)-type structures in mESCs to sialylated Galβ1,3GalNAc-type glycans on differentiation, with more prominent changes in enzyme activity occurring at later stages when embryoid bodies differentiated toward cardiomyocytes. Overall, simple, rapid, quantitative, and scalable glycoT activity analysis methods are presented. These use a range of natural and synthetic acceptors for the analysis of complex biological specimens that have limited availability. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Power-Law Scaling of the Impact Crater Size-Frequency Distribution on Pluto: A Preliminary Analysis Based on First Images from New Horizons' Flyby

    Directory of Open Access Journals (Sweden)

    Scholkmann F.

    2016-01-01

    Full Text Available The recent (14th July 2015) flyby of NASA's New Horizons spacecraft of the dwarf planet Pluto resulted in the first high-resolution images of the geological surface features of Pluto. Since previous studies showed that the impact crater size-frequency distribution (SFD) of different celestial objects of our solar system follows power-laws, the aim of the present analysis was to determine, for the first time, the power-law scaling behavior for Pluto's crater SFD based on the first images available in mid-September 2015. The analysis was based on a high-resolution image covering parts of Pluto's regions Sputnik Planum, Al-Idrisi Montes and Voyager Terra. 83 impact craters could be identified in these regions and their diameter (D) was determined. The analysis revealed that the crater diameter SFD shows a statistically significant power-law scaling (α = 2.4926±0.3309) in the interval of D values ranging from 3.75±1.14 km to the largest determined D value in this data set of 37.77 km. The value obtained for the scaling coefficient α is similar to the coefficients determined for the power-law scaling of the crater SFDs of other celestial objects in our solar system. Further analysis of Pluto's crater SFD is warranted as soon as new images are received from the spacecraft.
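
    A standard way to fit such an SFD exponent is the continuous maximum-likelihood estimator α = 1 + n / Σ ln(D_i / D_min) (e.g. Clauset et al. 2009); this is an assumption about the method, since the paper's exact fitting procedure is not restated here, and the diameters below are synthetic.

```python
import numpy as np

def powerlaw_alpha_mle(d, d_min):
    """Continuous MLE for P(D) ~ D**(-alpha), D >= d_min, with a rough
    standard error (alpha - 1) / sqrt(n)."""
    d = np.asarray(d, dtype=float)
    d = d[d >= d_min]
    n = d.size
    alpha = 1.0 + n / np.sum(np.log(d / d_min))
    return alpha, (alpha - 1.0) / np.sqrt(n)

# Synthetic crater diameters (km); the paper's fit used 83 craters with
# d_min ~ 3.75 km and found alpha ~ 2.49.
rng = np.random.default_rng(1)
diameters = 3.75 * (rng.pareto(1.5, 83) + 1.0)   # true pdf exponent ~2.5
alpha, se = powerlaw_alpha_mle(diameters, d_min=3.75)
print(f"alpha = {alpha:.2f} +/- {se:.2f}")
```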

  18. Scale-up of the electrokinetic fence technology for the removal of pesticides. Part II: Does size matter for removal of herbicides?

    Science.gov (United States)

    López-Vizcaíno, R; Risco, C; Isidro, J; Rodrigo, S; Saez, C; Cañizares, P; Navarro, V; Rodrigo, M A

    2017-01-01

    This work reports results of the application of electrokinetic fence technology in a 32 m^3 prototype which contains soil polluted with 2,4-D and oxyfluorfen, focusing on the evaluation of the mechanisms that describe the removal of these two herbicides and comparing results to those obtained in smaller plants: a pilot-scale mockup (175 L) and a lab-scale soil column (1 L). Results show that electric heating of the soil (coupled with the increase in volatility) is the key to explaining the removal of pollutants in the largest-scale facility, while electrokinetic transport processes are the primary mechanisms that explain the removal of herbicides in the lab-scale plant. 2-D and 3-D maps of the temperature and pollutant concentrations are used in the discussion of the results to shed light on the mechanisms and on how the size of the setup can lead to different conclusions, even though the same processes are occurring in the soil. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Assessing the impact of large-scale computing on the size and complexity of first-principles electromagnetic models

    International Nuclear Information System (INIS)

    Miller, E.K.

    1990-01-01

    There is a growing need to determine the electromagnetic performance of increasingly complex systems at ever higher frequencies. The ideal approach would be some appropriate combination of measurement, analysis, and computation so that system design and assessment can be achieved to a needed degree of accuracy at some acceptable cost. Both measurement and computation benefit from the continuing growth in computer power that, since the early 1950s, has increased by a factor of more than a million in speed and storage. For example, a CRAY2 has an effective throughput (not the clock rate) of about 10^11 floating-point operations (FLOPs) per hour compared with the approximately 10^5 provided by the UNIVAC-1. The purpose of this discussion is to illustrate the computational complexity of modeling large (in wavelengths) electromagnetic problems. In particular, the author makes the point that simply relying on faster computers for increasing the size and complexity of problems that can be modeled is less effective than might be anticipated from this raw increase in computer throughput. He suggests that rather than depending on faster computers alone, various analytical and numerical alternatives need development for reducing the overall FLOP count required to acquire the information desired. One approach is to decrease the operation count of the basic model computation itself, by reducing the order of the frequency dependence of the various numerical operations or their multiplying coefficients. Another is to decrease the number of model evaluations that are needed, an example being the number of frequency samples required to define a wideband response, by using an auxiliary model of the expected behavior. 11 refs., 5 figs., 2 tabs
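
    The argument that raw throughput alone is insufficient can be made concrete with a back-of-envelope example: for a dense method-of-moments surface model, unknowns grow roughly as f^2 and LU factorization as N^3, so cost grows as f^6 and a frequency doubling eats a 64x computer speedup. The scalings and baseline problem size below are assumed for illustration, not taken from the article.

```python
# Assumed scalings: surface-mesh unknowns N ~ f^2, dense LU ~ (2/3) N^3 FLOPs.
def lu_flops(freq_ghz, n_at_1ghz=1e4):
    n = n_at_1ghz * freq_ghz ** 2
    return (2.0 / 3.0) * n ** 3

for f in (1, 2, 4):
    # Each frequency doubling multiplies the cost by ~2^6 = 64.
    print(f"{f} GHz: ~{lu_flops(f):.2e} FLOPs")
```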

  20. Machine terms dictionary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1979-04-15

    This book gives descriptions of machine terms, covering machine design, drawing, machining methods, machine tools, machine materials, automobiles, measurement and control, electricity, basics of electronics, information technology, quality assurance, AutoCAD, and FA terms, as well as important formulas of mechanical engineering.

  1. Household Size and the Decision to Purchase Health Insurance in Cambodia: Results of a Discrete-Choice Experiment with Scale Adjustment.

    Science.gov (United States)

    Ozawa, Sachiko; Grewal, Simrun; Bridges, John F P

    2016-04-01

    Community-based health insurance (CBHI) schemes have been introduced in low- and middle-income countries to increase health service utilization and provide financial protection from high healthcare expenditures. We assess the impact of household size on decisions to enroll in CBHI and demonstrate how to correct for group disparity in scale (i.e. variance differences). A discrete choice experiment was conducted across five CBHI attributes. Preferences were elicited through forced-choice paired comparison choice tasks designed based on D-efficiency. Differences in preferences were examined between small (1-4 family members) and large (5-12 members) households using conditional logistic regression. The Swait and Louviere test was used to identify and correct for differences in scale. One hundred and sixty households were surveyed in Northwest Cambodia. An increased insurance premium was associated with disutility [odds ratio (OR) 0.61, p (…)] (…) decisions regardless of household size. Understanding how community members make decisions about health insurance can inform low- and middle-income countries' paths towards universal health coverage.

  2. Catch and size selectivity of small-scale fishing gear for the smooth-hound shark Mustelus mustelus (Linnaeus, 1758) (Chondrichthyes: Triakidae) from the Aegean Turkish coast

    Directory of Open Access Journals (Sweden)

    T. CEYHAN

    2010-10-01

    Full Text Available Catch rate, CPUE, biomass ratios and size selectivity of traditional longlines and trammel nets of Turkish coastal small-scale fisheries were investigated in order to describe the smooth-hound shark (Mustelus mustelus) fishery. The SELECT method was used to estimate the selectivity parameters of a variety of models for the trammel net inner panels of 150 and 170 mm mesh sizes. Catch composition and proportion of the species were significantly different in longlines and trammel nets. While the mean CPUE of the longline was 119.2±14.3 kg/1000 hooks, the values for the 150 and 170 mm trammel nets were 5.3±1.2 kg/1000 m of net and 12.7±3.9 kg/1000 m of net, respectively. Biomass ratios of the bycatch to the smooth-hound catch were found to be 1:0.32 for the 150 mm trammel net, 1:0.65 for the longline and 1:0.73 for the 170 mm trammel net. The estimated modal lengths and spreads were found to be 91.1 and 16.2 cm for 150 mm and 103.2 and 18.4 cm for 170 mm, respectively. The modal lengths of the species as well as the spread values increased with mesh size.

  3. Design and deployment strategies for small and medium sized reactors (SMRs) to overcome loss of economies of scale and incorporate increased proliferation resistance

    International Nuclear Information System (INIS)

    Kuznetsov, V.

    2007-01-01

    The designers of innovative small and medium sized reactors pursue new design and deployment strategies, making use of certain advantages provided by smaller reactor size and capacity to achieve reduced design complexity and simplified operation and maintenance requirements, and to provide for incremental capacity increase through multi-module plant clustering. Competitiveness of SMRs depends on the incorporated strategies to overcome loss of economies of scale, but it depends equally on finding appropriate market niches for such reactors. For many less developed countries, it is the features of enhanced proliferation resistance and increased robustness of barriers for sabotage protection that may ensure the progress of nuclear power. For such countries, small reactors without on-site refuelling, designed for infrequent replacement of well-contained fuel cassette(s) in a manner that impedes clandestine diversion of nuclear fuel material, may provide a solution. Based on the outputs of recent IAEA activities for innovative SMRs, the paper provides a summary of the state-of-the-art in approaches to improve SMR competitiveness and incorporate enhanced proliferation resistance and energy security. (author)

  4. The minimum or natural rate of flow and droplet size ejected by Taylor cone–jets: physical symmetries and scaling laws

    International Nuclear Information System (INIS)

    Gañán-Calvo, A M; Rebollo-Muñoz, N; Montanero, J M

    2013-01-01

    We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone–jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone–jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries arising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone–jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties. (paper)

  5. The minimum or natural rate of flow and droplet size ejected by Taylor cone-jets: physical symmetries and scaling laws

    Science.gov (United States)

    Gañán-Calvo, A. M.; Rebollo-Muñoz, N.; Montanero, J. M.

    2013-03-01

    We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone-jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone-jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries arising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone-jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties.

  6. Addiction Machines

    Directory of Open Access Journals (Sweden)

    James Godley

    2011-10-01

    Full Text Available Entry into the crypt William Burroughs shared with his mother opened and shut around a failed re-enactment of William Tell's shot through the prop placed upon a loved one's head. The accidental killing of his wife Joan completed the installation of the addictation machine that spun melancholia as manic dissemination. An early encryptment to which was added the audio portion of abuse deposited an undeliverable message in WB. William could never tell, although his corpus bears the inscription of this impossibility as another form of possibility.

  7. Machine musicianship

    Science.gov (United States)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.

  8. Benford analysis of quantum critical phenomena: First digit provides high finite-size scaling exponent while first two and further are not much better

    Science.gov (United States)

    Bera, Anindita; Mishra, Utkarsh; Singha Roy, Sudipto; Biswas, Anindya; Sen(De), Aditi; Sen, Ujjwal

    2018-06-01

    Benford's law is an empirical edict stating that the lower digits appear more often than higher ones as the first few significant digits in statistics of natural phenomena and mathematical tables. A marked proportion of such analyses is restricted to the first significant digit. We employ violation of Benford's law, up to the first four significant digits, for investigating magnetization and correlation data of paradigmatic quantum many-body systems to detect cooperative phenomena, focusing on the finite-size scaling exponents thereof. We find that for the transverse field quantum XY model, the behavior of the very first significant digit of an observable, at an arbitrary point of the parameter space, is enough to capture the quantum phase transition in the model with a relatively high scaling exponent. A higher number of significant digits does not provide an appreciable further advantage, in particular, in terms of an increase in scaling exponents. Since the first significant digit of a physical quantity is relatively simple to obtain in experiments, the results have potential implications for laboratory observations in noisy environments.
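
    A minimal version of the first-significant-digit analysis: extract leading digits, compare their frequencies with the Benford distribution, and use the distance as the "violation" that would then be scanned across the model's driving parameter. This sketch is generic rather than the authors' pipeline, and the lognormal test data are a placeholder for magnetization/correlation series.

```python
import numpy as np

BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))   # P(first digit = d)

def first_digits(values):
    """First significant digit of each nonzero value."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    return (v / 10.0 ** np.floor(np.log10(v))).astype(int)

def benford_violation(values):
    """Total-variation distance between observed first-digit frequencies
    and the Benford distribution -- one simple 'violation' measure."""
    d = first_digits(values)
    freq = np.bincount(d, minlength=10)[1:10] / d.size
    return 0.5 * np.abs(freq - BENFORD).sum()

rng = np.random.default_rng(7)
print(f"violation = {benford_violation(rng.lognormal(0, 2, 10000)):.4f}")
```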

  9. Size variation and collapse of emphysema holes at inspiration and expiration CT scan: evaluation with modified length scale method and image co-registration

    Directory of Open Access Journals (Sweden)

    Oh SY

    2017-07-01

    Full Text Available A novel approach of size-based emphysema clustering has been developed, and the size variation and collapse of holes in emphysema clusters were evaluated at inspiratory and expiratory computed tomography (CT). Thirty patients were visually evaluated for the size-based emphysema clustering technique, and a total of 72 patients were evaluated for analyzing collapse of the emphysema holes. A new approach for the size differentiation of emphysema holes was developed using the length scale, Gaussian low-pass filtering, and an iteration approach. The volumetric CT results of the emphysema patients were then analyzed using the new method, and deformable registration was carried out between inspiratory and expiratory CT. Blind visual evaluations of EI by two readers had significant correlations with the classification using the size-based emphysema clustering method (r-values of reader 1: 0.186, 0.890, 0.915, and 0.941; reader 2: 0.540, 0.667, 0.919, and 0.942). The results of collapse of emphysema holes using deformable registration were compared with the pulmonary function test (PFT) parameters using Pearson's correlation test. The mean extents of low-attenuation area (LAA), E1 (<1.5 mm), E2 (<7 mm), E3 (<15 mm), and E4 (≥15 mm) were 25.9%, 3.0%, 11.4%, 7.6%, and 3.9%, respectively, at the inspiratory CT, and 15.3%, 1.4%, 6.9%, 4.3%, and 2.6%, respectively, at the expiratory CT. The extents of LAA, E2, E3, and E4 were found to be significantly correlated with the PFT parameters (r = −0.53, −0.43, −0.48, and −0.25 with forced expiratory volume in 1 second (FEV1); −0.81, −0.62, −0.75, and (…)
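
    For intuition, hole-size classification on a binary low-attenuation mask can be sketched with connected components binned by equivalent diameter into the E1-E4 classes quoted above. This is a simplification: the study's actual method uses length scales with Gaussian low-pass filtering and iteration, which is not reproduced here, and the random mask is a stand-in for real CT data.

```python
import numpy as np
from scipy import ndimage

def size_classes(laa_mask, voxel_mm=1.0):
    """Bin connected low-attenuation components by equivalent sphere
    diameter into E1 (<1.5 mm), E2 (<7 mm), E3 (<15 mm), E4 (>=15 mm)."""
    labels, n = ndimage.label(laa_mask)
    sizes = ndimage.sum(laa_mask, labels, index=np.arange(1, n + 1))
    diam = 2.0 * np.cbrt(3.0 * sizes * voxel_mm ** 3 / (4.0 * np.pi))
    edges = [0.0, 1.5, 7.0, 15.0, np.inf]
    return {f"E{i + 1}": int(np.sum((diam >= lo) & (diam < hi)))
            for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:]))}

mask = np.random.default_rng(2).random((40, 40, 40)) < 0.12
print(size_classes(mask))
```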

  10. Pythagorean Means and Carnot Machines

    Indian Academy of Sciences (India)

    When Music Meets Heat. Ramandeep S Johal ... found their use in representing ratios on a musical scale (see Box 1). They are ... two legs of a journey, spending equal time t in each of them, ... a pump between Ti and Tc, and require that Rhi = Pic. This ... We can make similar statements, when the machines in case are.

  11. Gaussian processes for machine learning.

    Science.gov (United States)

    Seeger, Matthias

    2004-04-01

    Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to show up precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.
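
    For readers wanting the "computational simplicity" made explicit, here is textbook GP regression (Rasmussen & Williams-style posterior equations) in a few lines; the RBF kernel, its hyperparameters, and the toy data are all illustrative choices, and the Cholesky step is the O(n^3) cost that the sparse approximations cited above aim to avoid.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sq = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * sq / length ** 2)

# Noisy observations and test inputs.
x = np.array([-4.0, -2.0, 0.0, 1.5, 3.0])
y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)
xs = np.linspace(-5, 5, 9)

noise = 0.1 ** 2
K = rbf(x, x) + noise * np.eye(x.size)
Ks = rbf(x, xs)
L = np.linalg.cholesky(K)                       # the O(n^3) step
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks.T @ alpha                             # posterior mean at xs
v = np.linalg.solve(L, Ks)
var = rbf(xs, xs).diagonal() - np.sum(v * v, axis=0)   # posterior variance
print(np.round(mean, 2), np.round(var, 3))
```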

  12. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  13. GA-4 half-scale cask model fabrication

    International Nuclear Information System (INIS)

    Meyer, R.J.

    1995-01-01

    Unique fabrication experience was gained during the construction of a half-scale model of the GA-4 Legal Weight Truck Cask. Techniques were developed for forming, welding, and machining XM-19 stainless steel. Noncircular 'rings' of depleted uranium were cast and machined to close tolerances. The noncircular cask body, gamma shield, and cavity liner were produced using a nonconventional approach in which components were first machined to final size and then welded together using a low-distortion electron beam process. Special processes were developed for fabricating the bonded aluminum honeycomb impact limiters. The innovative design of the cask internals required precision deep hole drilling, low-distortion welding, and close tolerance machining. Valuable lessons learned were documented for use in future manufacturing of full-scale prototype and production units

  14. Remote Machining and Evaluation of Explosively Filled Munitions

    Data.gov (United States)

    Federal Laboratory Consortium — This facility is used for remote machining of explosively loaded ammunition. Munition sizes from small arms through 8-inch artillery can be accommodated. Sectioning,...

  15. Machinability of IPS Empress 2 framework ceramic.

    Science.gov (United States)

    Schmidt, C; Weigl, P

    2000-01-01

    Using ceramic materials for an automatic production of ceramic dentures by CAD/CAM is a challenge, because many technological, medical, and optical demands must be considered. The IPS Empress 2 framework ceramic meets most of them. This study shows the possibilities for machining this ceramic with economical parameters. The long-lifetime requirement for ceramic dentures demands a ductilely machined surface to avoid the well-known subsurface damage of brittle materials caused by machining. Slow and rapid damage propagation begins at break-outs and cracks, and limits lifetime significantly. Therefore, ductilely machined surfaces are an important requirement for machining dental ceramics. The machining tests were performed with various parameters such as tool grain size and feed speed. Denture ceramics were machined by jig grinding on a 5-axis CNC milling machine (Maho HGF 500) with a high-speed spindle up to 120,000 rpm. The results of the wear test indicate low tool wear. With one tool, eight occlusal surfaces can be machined, including roughing and finishing. One occlusal surface takes about 60 min of machining time. Recommended parameters for roughing are medium diamond grain size (D107), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 1000 mm/min, depth of cut a(e) = 0.06 mm, width of contact a(p) = 0.8 mm; and for finishing, ultra-fine diamond grain size (D46), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 100 mm/min, depth of cut a(e) = 0.02 mm, width of contact a(p) = 0.8 mm. The results of the machining tests give a reference for using IPS Empress(R) 2 framework ceramic in CAD/CAM systems. Copyright 2000 John Wiley & Sons, Inc.
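
    For reference, the recommended cutting parameters quoted above can be collected in one structure; the values are transcribed from the abstract, while the dictionary layout itself is our own:

        # Jig-grinding parameters for IPS Empress 2, as quoted in the abstract.
        grinding_params = {
            "roughing": {
                "grain_size": "D107 (medium diamond)",
                "cutting_speed_m_per_s": 4.7,
                "feed_speed_mm_per_min": 1000,
                "depth_of_cut_mm": 0.06,
                "width_of_contact_mm": 0.8,
            },
            "finishing": {
                "grain_size": "D46 (ultra-fine diamond)",
                "cutting_speed_m_per_s": 4.7,
                "feed_speed_mm_per_min": 100,
                "depth_of_cut_mm": 0.02,
                "width_of_contact_mm": 0.8,
            },
        }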

  16. A Study of the Resolution of Dental Intraoral X-Ray Machines

    International Nuclear Information System (INIS)

    Kim, Seon Ju; Chung, Hyon De

    1990-01-01

    The purpose of this study was to assess the resolution and focal spot size of dental X-ray machines. Fifty dental X-ray machines were selected for measuring resolution and focal spot size. These machines were used in general dental clinics. The time since installation of the machines varied from 1 year to 10 years. The resolution of these machines was measured with the test pattern. The focal spot size of these machines was measured with the star test pattern. The following results were obtained: 1. The resolution of dental intraoral X-ray machines was not significantly changed in ten years. 2. The focal spot size of dental intraoral X-ray machines was not significantly increased in ten years. The difference between the mean focal spot size and the nominal focal spot size was statistically significant at the 0.05 level for machines used for more than 3 years.

  17. Engineering and Scaling the Spontaneous Magnetization Reversal of Faraday Induced Magnetic Relaxation in Nano-Sized Amorphous Ni Coated on Crystalline Au.

    Science.gov (United States)

    Li, Wen-Hsien; Lee, Chi-Hung; Kuo, Chen-Chen

    2016-05-28

    We report on the generation of large inverse remanent magnetizations in a nano-sized core/shell structure of Au/Ni by turning off the applied magnetic field. The remanent magnetization is very sensitive to the field reduction rate as well as to the thermal and field processes before the switching off of the magnetic field. Spontaneous reversal in direction and increase in magnitude of the remanent magnetization in subsequent relaxations over time were found. All of the various types of temporal relaxation curves of the remanent magnetizations are successfully scaled by a stretched exponential decay profile, characterized by two pairs of relaxation times and dynamic exponents. The relaxation time is used to describe the reduction rate, while the dynamic exponent describes the dynamical slowing down of the relaxation through time evolution. The key to these effects is to have the induced eddy current running beneath the amorphous Ni shells through Faraday induction.

  18. Engineering and Scaling the Spontaneous Magnetization Reversal of Faraday Induced Magnetic Relaxation in Nano-Sized Amorphous Ni Coated on Crystalline Au

    Science.gov (United States)

    Li, Wen-Hsien; Lee, Chi-Hung; Kuo, Chen-Chen

    2016-01-01

    We report on the generation of large inverse remanent magnetizations in a nano-sized core/shell structure of Au/Ni by turning off the applied magnetic field. The remanent magnetization is very sensitive to the field reduction rate as well as to the thermal and field processes before the switching off of the magnetic field. Spontaneous reversal in direction and increase in magnitude of the remanent magnetization in subsequent relaxations over time were found. All of the various types of temporal relaxation curves of the remanent magnetizations are successfully scaled by a stretched exponential decay profile, characterized by two pairs of relaxation times and dynamic exponents. The relaxation time is used to describe the reduction rate, while the dynamic exponent describes the dynamical slowing down of the relaxation through time evolution. The key to these effects is to have the induced eddy current running beneath the amorphous Ni shells through Faraday induction. PMID:28773549

  19. Output Enhancement in the Transfer-Field Machine Using Rotor ...

    African Journals Online (AJOL)

    Output Enhancement in the Transfer-Field Machine Using Rotor Circuit Induced Currents. ... The output of a plain transfer-field machine would be much less than that of a conventional machine of comparable size and dimensions. The use of ... The same effects have their parallel for the asynchronous mode of operation.

  20. Does lake size matter? Combining morphology and process modeling to examine the contribution of lake classes to population-scale processes

    Science.gov (United States)

    Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.

    2014-01-01

    With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers’ understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight that the critical thresholds at which total perimeter, area and volume would be evenly distributed across lake size-classes correspond to Pareto slopes of 0.63, 1 and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching “lake population” level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even if smaller lakes contribute relatively less to total surface area than larger lakes, the increasing carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
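
    The upscaling argument can be made concrete with a short sketch (all parameter values are hypothetical, not those of the paper): draw lake areas from a Pareto distribution, weight each lake by a carbon accumulation rate that rises as area shrinks, and compare the area and carbon shares across size classes:

        import numpy as np

        rng = np.random.default_rng(1)

        # Pareto-distributed lake areas (slope and minimum area hypothetical).
        slope, a_min = 1.0, 0.01                    # Pareto exponent, km^2
        areas = a_min * (1 + rng.pareto(slope, size=100_000))

        # Hypothetical process model: per-unit-area carbon accumulation rate
        # increases as lakes get smaller (negative power of area).
        rate = areas ** -0.3                        # illustrative exponent
        carbon = rate * areas                       # per-lake carbon accumulation

        small = areas < np.median(areas)
        print("share of total area in small lakes:  ", areas[small].sum() / areas.sum())
        print("share of total carbon in small lakes:", carbon[small].sum() / carbon.sum())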

  1. Machine Shop Lathes.

    Science.gov (United States)

    Dunn, James

    This guide, the second in a series of five machine shop curriculum manuals, was designed for use in machine shop courses in Oklahoma. The purpose of the manual is to equip students with basic knowledge and skills that will enable them to enter the machine trade at the machine-operator level. The curriculum is designed so that it can be used in…

  2. Superconducting rotating machines

    International Nuclear Information System (INIS)

    Smith, J.L. Jr.; Kirtley, J.L. Jr.; Thullen, P.

    1975-01-01

    The opportunities and limitations of the applications of superconductors in rotating electric machines are given. The relevant properties of superconductors and the fundamental requirements for rotating electric machines are discussed. The current state-of-the-art of superconducting machines is reviewed. Key problems, future developments and the long range potential of superconducting machines are assessed

  3. Options for small and medium sized reactors (SMRs) to overcome loss of economies of scale and incorporate increased proliferation resistance and energy security

    International Nuclear Information System (INIS)

    Kuznetsov, Vladimir

    2008-01-01

    The designers of innovative small and medium sized reactors pursue new design and deployment strategies making use of certain advantages provided by smaller reactor size and capacity to achieve reduced design complexity and simplified operation and maintenance requirements, and to provide for incremental capacity increase through multi-module plant clustering. Competitiveness of SMRs depends on the incorporated strategies to overcome loss of economies of scale but equally it depends on finding appropriate market niches for such reactors. For many less developed countries, these are the features of enhanced proliferation resistance and increased robustness of barriers for sabotage protection that may ensure the progress of nuclear power. For such countries, small reactors without on-site refuelling, designed for infrequent replacement of well-contained fuel cassette(s) in a manner that impedes clandestine diversion of nuclear fuel material, may provide a solution. Based on the outputs of recent IAEA activities for innovative SMRs, the paper provides a summary of the state-of-the-art in approaches to improve SMR competitiveness and incorporate enhanced proliferation resistance and energy security. (author)

  4. Machinability of a Stainless Steel by Electrochemical Discharge Microdrilling

    International Nuclear Information System (INIS)

    Coteata, Margareta; Pop, Nicolae; Slatineanu, Laurentiu; Schulze, Hans-Peter; Besliu, Irina

    2011-01-01

    Due to the chemical elements included in their structure for ensuring an increased resistance to the environment action, the stainless steels are characterized by a low machinability when classical machining methods are applied. For this reason, sometimes non-traditional machining methods are applied, one of these being the electrochemical discharge machining. To obtain microholes and to evaluate the machinability by electrochemical discharge microdrilling, test pieces of stainless steel were used for experimental research. The electrolyte was an aqueous solution of sodium silicate with different densities. A complete factorial plan was designed to highlight the influence of some input variables on the sizes of the considered machinability indexes (electrode tool wear, material removal rate, depth of the machined hole). By mathematical processing of the experimental data, empirical functions were established both for stainless steel and carbon steel. Graphical representations were used to give a clearer picture of the influence exerted by the considered input variables on the size of the machinability indexes.
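
    Empirical functions of the kind mentioned above are often power laws fitted to factorial-plan data; a minimal sketch of such a fit by log-linear least squares (all data and variable names synthetic, purely to illustrate the procedure):

        import numpy as np

        # Hypothetical factorial-plan data: voltage U [V], electrolyte density d,
        # and a measured machinability index MRR (values are synthetic).
        U   = np.array([24, 24, 28, 28, 32, 32], dtype=float)
        d   = np.array([1.1, 1.2, 1.1, 1.2, 1.1, 1.2])
        MRR = np.array([0.8, 1.0, 1.3, 1.6, 2.0, 2.4])

        # Fit MRR = C * U^a * d^b by linear least squares in log space.
        A = np.column_stack([np.ones_like(U), np.log(U), np.log(d)])
        coef, *_ = np.linalg.lstsq(A, np.log(MRR), rcond=None)
        C, a, b = np.exp(coef[0]), coef[1], coef[2]
        print(f"MRR ~ {C:.3g} * U^{a:.2f} * d^{b:.2f}")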

  5. Brain scaling in mammalian evolution as a consequence of concerted and mosaic changes in numbers of neurons and average neuronal cell size

    Directory of Open Access Journals (Sweden)

    Suzana eHerculano-Houzel

    2014-08-01

    Full Text Available Enough species have now been subject to systematic quantitative analysis of the relationship between the morphology and cellular composition of their brain that patterns begin to emerge and shed light on the evolutionary path that led to mammalian brain diversity. Based on an analysis of the shared and clade-specific characteristics of 41 modern mammalian species in 6 clades, and in light of the phylogenetic relationships among them, here we propose that ancestral mammal brains were composed and scaled in their cellular composition like modern afrotherian and glire brains: with an addition of neurons that is accompanied by a decrease in neuronal density and very little modification in glial cell density, implying a significant increase in average neuronal cell size in larger brains, and the allocation of approximately 2 neurons in the cerebral cortex and 8 neurons in the cerebellum for every neuron allocated to the rest of the brain. We also propose that in some clades the scaling of different brain structures has diverged away from the common ancestral layout through clade-specific (or clade-defining changes in how average neuronal cell mass relates to numbers of neurons in each structure, and how numbers of neurons are differentially allocated to each structure relative to the number of neurons in the rest of the brain. Thus, the evolutionary expansion of mammalian brains has involved both concerted and mosaic patterns of scaling across structures. This is, to our knowledge, the first mechanistic model that explains the generation of brains large and small in mammalian evolution, and it opens up new horizons for seeking the cellular pathways and genes involved in brain evolution.

  6. The scale effect on soil erosion. A plot approach to understand connectivity on slopes under cultivation at variable plot sizes and under Mediterranean climatic conditions

    Science.gov (United States)

    Cerdà, Artemi; Bagarello, Vicenzo; Ferro, Vito; Iovino, Massimo; Borja, Manuel Estaban Lucas; Francisco Martínez Murillo, Juan; González Camarena, Rafael

    2017-04-01

    It is well known that soil erosion changes over time and seasons, and attention was paid to this issue in the past (González Hidalgo et al., 2010; 2012). However, although the scientific community knows that soil erosion is also a temporally and spatially scale-dependent process (Parsons et al., 1990; Cerdà et al., 2009; González Hidalgo et al., 2013; Sadeghi et al., 2015), very little has been done on this topic. This is due to the fact that at different scales different soil erosion mechanisms (splash, sheetflow, rill development) are active, and their rates change with the scale of measurement (Wainwright et al., 2002; López-Vicente et al., 2015). This makes research on soil erosion complex and difficult, and it is necessary to develop a conceptual framework, but also measurements, that inform about soil erosion behaviour. Connectivity is the key concept to understand how changes in scale result in different rates of soil and water losses (Parsons et al., 1996; Parsons et al., 2015; Poeppl et al., 2016). Most of the research developed around the connectivity concept was applied at watershed or basin scales (Galdino et al., 2016; Martínez-Casasnovas et al., 2016; López Vicente et al., 2016; Marchamalo et al., 2015; Masselink et al., 2016), but very little is known about the connectivity issue at slope scale (Cerdà and Jurgensen, 2011). The El Teularet (Eastern Iberian Peninsula) and Sparacia (Sicily) soil erosion experimental stations have been active for 15 years, and data collected on different plot sizes can shed light on the effect of scale on runoff generation and soil losses and give information to understand how the transport of materials is determined by connectivity from pedon to slope scale (Cerdà et al., 2014; Bagarello et al., 2015a; 2015b). The comparison of the results of the two research stations will shed light on the rates of soil erosion and the mechanisms involved at different scales. Our

  7. Amp: A modular approach to machine learning in atomistic simulations

    Science.gov (United States)

    Khorshidi, Alireza; Peterson, Andrew A.

    2016-10-01

    Electronic structure calculations, such as those employing Kohn-Sham density functional theory or ab initio wavefunction theories, have allowed for atomistic-level understandings of a wide variety of phenomena and properties of matter at small scales. However, the computational cost of electronic structure methods drastically increases with length and time scales, which makes these methods difficult for long time-scale molecular dynamics simulations or large-sized systems. Machine-learning techniques can provide accurate potentials that can match the quality of electronic structure calculations, provided sufficient training data. These potentials can then be used to rapidly simulate large and long time-scale phenomena at similar quality to the parent electronic structure approach. Machine-learning potentials usually take a bias-free mathematical form and can be readily developed for a wide variety of systems. Electronic structure calculations have favorable properties (namely, they are noiseless and targeted training data can be produced on demand) that make them particularly well-suited for machine learning. This paper discusses our modular approach to atomistic machine learning through the development of the open-source Atomistic Machine-learning Package (Amp), which allows for representations of both the total and atom-centered potential energy surface, in both periodic and non-periodic systems. Potentials developed through the atom-centered approach are simultaneously applicable for systems with various sizes. Interpolation can be enhanced by introducing custom descriptors of the local environment. We demonstrate this in the current work for Gaussian-type, bispectrum, and Zernike-type descriptors. Amp has an intuitive and modular structure with an interface through the python scripting language yet has parallelizable fortran components for demanding tasks; it is designed to integrate closely with the widely used Atomic Simulation Environment (ASE), which
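
    A hedged usage sketch in the spirit of the abstract (module paths, argument names, and the training file are illustrative and version-dependent; consult the Amp documentation for exact signatures):

        # Sketch only: Amp interface details vary between releases.
        from amp import Amp
        from amp.descriptor.gaussian import Gaussian        # Gaussian-type descriptors
        from amp.model.neuralnetwork import NeuralNetwork   # atom-centered neural network

        # Atom-centered scheme: local-environment descriptors feed a neural
        # network that sums atomic energies into the total potential energy.
        calc = Amp(descriptor=Gaussian(),
                   model=NeuralNetwork(hiddenlayers=(10, 10)))

        # 'train.traj' is a placeholder ASE trajectory holding reference
        # electronic-structure energies and forces.
        calc.train(images='train.traj')

        # The trained potential then acts as an ordinary ASE calculator:
        # atoms.set_calculator(calc); atoms.get_potential_energy()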

  8. A Review of Machine Learning and Data Mining Approaches for Business Applications in Social Networks

    OpenAIRE

    Evis Trandafili; Marenglen Biba

    2013-01-01

    Social networks have an outstanding marketing value and developing data mining methods for viral marketing is a hot topic in the research community. However, most social networks remain impossible to fully analyze and understand due to their prohibitive sizes and the incapability of traditional machine learning and data mining approaches to deal with the new dimension in the learning process related to the large-scale environment where the data are produced. On one hand, the birth and evolution...

  9. Reliability assessment of the fueling machine of the CANDU reactor

    International Nuclear Information System (INIS)

    Al-Kusayer, T.A.

    1985-01-01

    Fueling of CANDU reactors is carried out by two fueling machines, each serving one end of the reactor. The fueling machine becomes a part of the primary heat transport system during the refueling operations, and hence, some refueling machine malfunctions could result in a small-scale loss-of-coolant accident. Fueling machine failures and the failure sequences are discussed. The unavailability of the fueling machine is estimated by using fault tree analysis. The probability of mechanical failure of the fueling machine interface is estimated as 1.08 x 10^-5. (orig.) [de
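
    Fault tree analysis of this kind combines basic-event probabilities through AND/OR gates; a minimal sketch with invented basic-event probabilities (not the study's actual data or gate structure):

        # Independent basic-event probabilities (purely illustrative numbers).
        p_seal_fail   = 2.0e-3
        p_latch_fail  = 5.0e-4
        p_sensor_fail = 1.0e-3

        def gate_and(*ps):
            """All inputs must fail: product of probabilities."""
            out = 1.0
            for p in ps:
                out *= p
            return out

        def gate_or(*ps):
            """Any input failing suffices: 1 - prod(1 - p)."""
            out = 1.0
            for p in ps:
                out *= 1.0 - p
            return 1.0 - out

        # Example top event: seal fails AND (latch OR sensor fails).
        p_top = gate_and(p_seal_fail, gate_or(p_latch_fail, p_sensor_fail))
        print(f"top-event probability: {p_top:.2e}")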

  10. Machine assisted histogram classification

    Science.gov (United States)

    Benyó, B.; Gaspar, C.; Somogyi, P.

    2010-04-01

    LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty or ageing components can be either done visually using instruments, such as the LHCb Histogram Presenter, or with the help of automated tools. In order to assist detector experts in handling the vast monitoring information resulting from the sheer size of the detector, we propose a graph based clustering tool combined with machine learning algorithm and demonstrate its use by processing histograms representing 2D hitmaps events. We prove the concept by detecting ion feedback events in the LHCb experiment's RICH subdetector.

  11. Machine assisted histogram classification

    Energy Technology Data Exchange (ETDEWEB)

    Benyo, B; Somogyi, P [BME-IIT, H-1117 Budapest, Magyar tudosok koerutja 2. (Hungary); Gaspar, C, E-mail: Peter.Somogyi@cern.ch [CERN-PH, CH-1211 Geneve 23 (Switzerland)

    2010-04-01

    LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty or ageing components can be either done visually using instruments, such as the LHCb Histogram Presenter, or with the help of automated tools. In order to assist detector experts in handling the vast monitoring information resulting from the sheer size of the detector, we propose a graph based clustering tool combined with machine learning algorithm and demonstrate its use by processing histograms representing 2D hitmaps events. We prove the concept by detecting ion feedback events in the LHCb experiment's RICH subdetector.

  12. Surface mining machines problems of maintenance and modernization

    CERN Document Server

    Rusiński, Eugeniusz; Moczko, Przemysław; Pietrusiak, Damian

    2017-01-01

    This unique volume imparts practical information on the operation, maintenance, and modernization of heavy performance machines such as lignite mine machines, bucket wheel excavators, and spreaders. Problems of large-scale machines (mega machines) are highly specific and not well recognized in the common mechanical engineering environment. Prof. Rusiński and his co-authors identify solutions that increase the durability of these machines as well as discuss methods of failure analysis and technical condition assessment procedures. "Surface Mining Machines: Problems in Maintenance and Modernization" stands as a much-needed guidebook for engineers facing the particular challenges of heavy performance machines and offers a distinct and interesting demonstration of scale-up issues for researchers and scientists from across the fields of machine design and mechanical engineering.

  13. Effects of dimensional size and surface roughness on service performance for a micro Laval nozzle

    International Nuclear Information System (INIS)

    Cai, Yukui; Liu, Zhanqiang; Shi, Zhenyu

    2017-01-01

    Nozzles with large and small dimensions are widely used in various industries. The main objective of this research is to investigate the effects of dimensional size and surface roughness on the service performance of a micro Laval nozzle. The variation of nozzle service performance from the conventional macro to the micro scale is presented in this paper. It shows that the nozzle dimensional size has a serious effect on the nozzle gas flow friction. With the decrease of nozzle size, the velocity performance and thrust performance deteriorate. The micro nozzle performance is less sensitive to variations of surface roughness than that of a large-scale nozzle. Surface quality improvement and burr prevention technologies are proposed to reduce the friction effect on the micro nozzle performance. A novel process is then developed to control and suppress burr generation during micro nozzle machining. Polymethyl methacrylate (PMMA) is applied as a coating material on the rough-machined surface before finish machining. Finally, the micro nozzle with a throat diameter of 1 mm is machined successfully. Thrust test results show that implementing this machining process improves the service performance of the micro nozzle. (paper)

  14. The uranium machine

    International Nuclear Information System (INIS)

    Walker, M.

    1990-01-01

    The German atom bomb is a chimera. Scientists such as Carl Friedrich von Weizsaecker and Werner Heisenberg long claimed that they refused to carry out such research in the Third Reich because they did not want to put so terrible a weapon into Hitler's hands. The author produces evidence proving that the German physicists were never in a position to carry out a research project on the scale of the 'Manhattan Project', quite apart from the fact that they lacked important technical prerequisites for isotope separation. With a detective's touch the author succeeds in reconstructing the competition for the bomb in minute detail. This book is the most detailed and precise analysis of the reality of that uranium machine which for four decades has haunted scientific and journalistic literature. (orig./HP) [de

  15. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  16. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Full Text Available Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used in wind tunnel testing or measurements in confined spaces with limited optical access.

  17. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used in wind tunnel testing or measurements in confined spaces with limited optical access.
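
    PSP intensities are conventionally converted to pressures through a Stern-Volmer-type calibration, which is what a static calibration chamber provides; a minimal sketch (synthetic calibration points and coefficients, purely illustrative):

        import numpy as np

        # Stern-Volmer form: I_ref / I = A + B * (P / P_ref).
        # Synthetic calibration data from a static chamber (illustrative).
        P_over_Pref = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
        Iref_over_I = (0.15 + 0.85 * P_over_Pref
                       + 0.005 * np.random.default_rng(2).standard_normal(5))

        # Linear least-squares fit for the calibration coefficients A and B.
        B, A = np.polyfit(P_over_Pref, Iref_over_I, 1)
        print(f"A = {A:.3f}, B = {B:.3f}")

        # Applying the calibration to a measured intensity ratio:
        measured = 1.05
        pressure_ratio = (measured - A) / B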

  18. Adverse drug reaction prediction using scores produced by large-scale drug-protein target docking on high-performance computing machines.

    Science.gov (United States)

    LaBute, Montiago X; Zhang, Xiaohua; Lenderman, Jason; Bennion, Brian J; Wong, Sergio E; Lightstone, Felice C

    2014-01-01

    Late-stage or post-market identification of adverse drug reactions (ADRs) is a significant public health issue and a source of major economic liability for drug development. Thus, reliable in silico screening of drug candidates for possible ADRs would be advantageous. In this work, we introduce a computational approach that predicts ADRs by combining the results of molecular docking and leverages known ADR information from DrugBank and SIDER. We employed a recently parallelized version of AutoDock Vina (VinaLC) to dock 906 small molecule drugs to a virtual panel of 409 DrugBank protein targets. L1-regularized logistic regression models were trained on the resulting docking scores of a 560 compound subset from the initial 906 compounds to predict 85 side effects, grouped into 10 ADR phenotype groups. Only 21% (87 out of 409) of the drug-protein binding features involve known targets of the drug subset, providing a significant probe of off-target effects. As a control, associations of this drug subset with the 555 annotated targets of these compounds, as reported in DrugBank, were used as features to train a separate group of models. The Vina off-target models and the DrugBank on-target models yielded comparable median area-under-the-receiver-operating-characteristic-curves (AUCs) during 10-fold cross-validation (0.60-0.69 and 0.61-0.74, respectively). Evidence was found in the PubMed literature to support several putative ADR-protein associations identified by our analysis. Among them, several associations between neoplasm-related ADRs and known tumor suppressor and tumor invasiveness marker proteins were found. A dual role for interstitial collagenase in both neoplasms and aneurysm formation was also identified. These associations all involve off-target proteins and could not have been found using available drug/on-target interaction data. This study illustrates a path forward to comprehensive ADR virtual screening that can potentially scale with increasing number
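
    A hedged sketch of the modeling step described above, L1-regularized logistic regression on docking-score features with AUC from 10-fold cross-validation (synthetic random data stand in for the Vina scores and ADR labels):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for docking scores: 560 drugs x 409 protein targets.
        rng = np.random.default_rng(3)
        X = rng.standard_normal((560, 409))
        y = rng.integers(0, 2, size=560)          # placeholder ADR labels

        # The L1 penalty drives most target weights to zero, selecting a sparse
        # set of (possibly off-target) protein associations per side effect.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        aucs = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
        print("median AUC:", np.median(aucs))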

  19. Adverse drug reaction prediction using scores produced by large-scale drug-protein target docking on high-performance computing machines.

    Directory of Open Access Journals (Sweden)

    Montiago X LaBute

    Full Text Available Late-stage or post-market identification of adverse drug reactions (ADRs) is a significant public health issue and a source of major economic liability for drug development. Thus, reliable in silico screening of drug candidates for possible ADRs would be advantageous. In this work, we introduce a computational approach that predicts ADRs by combining the results of molecular docking and leverages known ADR information from DrugBank and SIDER. We employed a recently parallelized version of AutoDock Vina (VinaLC) to dock 906 small molecule drugs to a virtual panel of 409 DrugBank protein targets. L1-regularized logistic regression models were trained on the resulting docking scores of a 560 compound subset from the initial 906 compounds to predict 85 side effects, grouped into 10 ADR phenotype groups. Only 21% (87 out of 409) of the drug-protein binding features involve known targets of the drug subset, providing a significant probe of off-target effects. As a control, associations of this drug subset with the 555 annotated targets of these compounds, as reported in DrugBank, were used as features to train a separate group of models. The Vina off-target models and the DrugBank on-target models yielded comparable median area-under-the-receiver-operating-characteristic-curves (AUCs) during 10-fold cross-validation (0.60-0.69 and 0.61-0.74, respectively). Evidence was found in the PubMed literature to support several putative ADR-protein associations identified by our analysis. Among them, several associations between neoplasm-related ADRs and known tumor suppressor and tumor invasiveness marker proteins were found. A dual role for interstitial collagenase in both neoplasms and aneurysm formation was also identified. These associations all involve off-target proteins and could not have been found using available drug/on-target interaction data. This study illustrates a path forward to comprehensive ADR virtual screening that can potentially scale with

  20. MITS machine operations

    International Nuclear Information System (INIS)

    Flinchem, J.

    1980-01-01

    This document contains procedures which apply to operations performed on individual P-1c machines in the Machine Interface Test System (MITS) at AiResearch Manufacturing Company's Torrance, California Facility

  1. Brain versus Machine Control.

    Directory of Open Access Journals (Sweden)

    Jose M Carmena

    2004-12-01

    Full Text Available Dr. Octopus, the villain of the movie "Spiderman 2", is a fusion of man and machine. Neuroscientist Jose Carmena examines the facts behind this fictional account of a brain-machine interface.

  2. Applied machining technology

    CERN Document Server

    Tschätsch, Heinz

    2010-01-01

    Machining and cutting technologies are still crucial for many manufacturing processes. This reference presents all important machining processes in a comprehensive and coherent way. It includes many examples of concrete calculations, problems and solutions.

  3. Machining with abrasives

    CERN Document Server

    Jackson, Mark J

    2011-01-01

    Abrasive machining is key to obtaining the desired geometry and surface quality in manufacturing. This book discusses the fundamentals and advances in the abrasive machining processes. It provides a complete overview of developing areas in the field.

  4. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  5. Improved Saturated Hydraulic Conductivity Pedotransfer Functions Using Machine Learning Methods

    Science.gov (United States)

    Araya, S. N.; Ghezzehei, T. A.

    2017-12-01

    Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome and instead pedotransfer functions (PTFs) are often used to estimate it. Despite much progress over the years, generic PTFs for estimating hydraulic conductivity generally perform poorly. We develop significantly improved PTFs by applying state-of-the-art machine learning techniques coupled with high-performance computing on a large database of over 20,000 soils (the USKSAT and Florida Soil Characterization databases). We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance, with root mean square errors less than 0.7 and mean errors on the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included percent clay, bulk density, organic carbon percent, coefficient of uniformity, and values derived from water retention characteristics. Model performances were consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine learning based PTFs to estimate Ks. The study also evaluates the importance of various soil properties and their transformations in explaining Ks.
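
    A hedged sketch of the winning approach, gradient boosting on soil properties to predict log-scaled Ks (the feature names and data below are placeholders, not the USKSAT database):

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        # Placeholder predictors standing in for D10, clay %, bulk density,
        # and organic carbon %; the target is log10(Ks) in cm/h.
        rng = np.random.default_rng(4)
        X = rng.uniform(size=(5000, 4))
        log_ks = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * rng.standard_normal(5000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, log_ks, random_state=0)
        gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
        gbm.fit(X_tr, y_tr)

        # RMSE in log10(cm/h) units, plus per-feature importances.
        rmse = mean_squared_error(y_te, gbm.predict(X_te)) ** 0.5
        print("RMSE (log scale):", rmse)
        print("feature importances:", gbm.feature_importances_)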

  6. Machine protection systems

    CERN Document Server

    Macpherson, A L

    2010-01-01

    A summary of the Machine Protection System of the LHC is given, with particular attention given to the outstanding issues to be addressed, rather than the successes of the machine protection system from the 2009 run. In particular, the issues of Safe Machine Parameter system, collimation and beam cleaning, the beam dump system and abort gap cleaning, injection and dump protection, and the overall machine protection program for the upcoming run are summarised.

  7. Dictionary of machine terms

    International Nuclear Information System (INIS)

    1990-06-01

    This book is a dictionary of machine terms; it contains an introduction, notes from the compilation committee, and introductory remarks. It gives descriptions of machine terms in alphabetical order from A to Z, and also includes abbreviations of machine terms, a symbol table, guidance on how to read mathematical symbols, and abbreviations and terms used in drawings.

  8. Mankind, machines and people

    Energy Technology Data Exchange (ETDEWEB)

    Hugli, A

    1984-01-01

    The following questions are addressed: Is there a difference between machines and men, between human communication and communication with machines? Will we ever reach the point where the dream of artificial intelligence becomes a reality? Will thinking machines be able to replace the human spirit in all its aspects? Social consequences and philosophical aspects are discussed. 8 references.

  9. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

    Turing showed the existence of a model universal for the set of Turing machines in the sense that, given an encoding of any Turing machine as input, the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  10. HTS machine laboratory prototype

    DEFF Research Database (Denmark)

    machine. The machine comprises six stationary HTS field windings wound from both YBCO and BSCCO tape operated at liquid nitrogen temperature and enclosed in a cryostat, and a three phase armature winding spinning at up to 300 rpm. This design has full functionality of HTS synchronous machines. The design...

  11. Your Sewing Machine.

    Science.gov (United States)

    Peacock, Marion E.

    The programed instruction manual is designed to aid the student in learning the parts, uses, and operation of the sewing machine. Drawings of sewing machine parts are presented, and space is provided for the student's written responses. Following an introductory section identifying sewing machine parts, the manual deals with each part and its…

  12. How large-scale technological development should be in the future. Survey and research on highly automated machines; Kongo no daikibo gijutsu kaihatsu no hoko ni tsuite. Kodo jidoka kikai ni kansuru chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1982-03-01

    A survey was conducted on highly automated machines such as industrial robots. The development task derived from a needs survey is the construction of a robot for dangerous work. It is pointed out that work in coal mines, tall buildings, industrial complexes, or nuclear power plants may encounter large-scale accidents, and the task is how to perform such work in an automated way. The tasks selected for development after a seeds survey analysis fall into three groups of element technologies: sensors and recognition functions; mechanisms and materials; and control and data processing. These element technologies are to be ultimately integrated into a robot for critical work, combining a highly intelligent robot main body with an integrated management system. Since humans will at times have to operate such a robot directly under delicate conditions and share the burden of judgement and thinking, it is also necessary to develop technologies to solve problems of man-robot engineering. It is proposed that a research and development program for a dangerous-work robot be established before development is started. (NEDO)

  13. Real-time wavelet-based inline banknote-in-bundle counting for cut-and-bundle machines

    Science.gov (United States)

    Petker, Denis; Lohweg, Volker; Gillich, Eugen; Türke, Thomas; Willeke, Harald; Lochmüller, Jens; Schaede, Johannes

    2011-03-01

    Automatic banknote sheet cut-and-bundle machines are widely used within the scope of banknote production. Besides the cutting-and-bundling, which is a mature technology, image-processing-based quality inspection for this type of machine is attractive. We present in this work a new real-time touchless counting and cutting-blade quality assurance system, based on a color CCD camera and a dual-core computer, for cut-and-bundle applications in banknote production. The system, which applies wavelet-based multi-scale filtering, is able to count the banknotes inside a 100-bundle within 200-300 ms, depending on the window size.
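
    One plausible reading of the wavelet-based counting step (our sketch, not the authors' implementation): treat a column of the bundle-edge image as a 1D profile, suppress fine-scale noise by zeroing the finest detail coefficients, and count the surviving peaks as banknote edges:

        import numpy as np
        import pywt
        from scipy.signal import find_peaks

        # Synthetic edge profile of a bundle: ~100 periodic banknote edges + noise.
        rng = np.random.default_rng(5)
        x = np.arange(2048)
        profile = np.sin(2 * np.pi * x / 20.5) + 0.4 * rng.standard_normal(x.size)

        # Multi-scale filtering: discard the finest detail level, reconstruct.
        coeffs = pywt.wavedec(profile, "db4", level=4)
        coeffs[-1] = np.zeros_like(coeffs[-1])          # kill finest-scale noise
        smooth = pywt.waverec(coeffs, "db4")

        # Each surviving peak corresponds to one banknote edge.
        peaks, _ = find_peaks(smooth, height=0.5, distance=10)
        print("estimated banknote count:", len(peaks))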

  14. Quantum machine learning.

    Science.gov (United States)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  15. Asynchronized synchronous machines

    CERN Document Server

    Botvinnik, M M

    1964-01-01

    Asynchronized Synchronous Machines focuses on the theoretical research on asynchronized synchronous (AS) machines, which are "hybrids” of synchronous and induction machines that can operate with slip. Topics covered in this book include the initial equations; vector diagram of an AS machine; regulation in cases of deviation from the law of full compensation; parameters of the excitation system; and schematic diagram of an excitation regulator. The possible applications of AS machines and its calculations in certain cases are also discussed. This publication is beneficial for students and indiv

  16. The correlation of nitrite concentration with lesion size in initial phase of stroke; It is not correlated with National Institute Health Stroke Scale

    Directory of Open Access Journals (Sweden)

    Mehdi Nematbakhsh

    2008-06-01

    Full Text Available

    • BACKGROUND: The role of nitric oxide (NO) and its metabolites in stroke has been examined clinically and experimentally. The relationship between plasma NO level and lesion size (LS) or clinical severity of stroke is still under investigation. In this clinical study, the serum level of nitrite (NI), the final metabolite of NO, was measured on the first and fifth days after onset of the stroke, and its correlation with LS was determined.
    • METHOD: 37 cerebrovascular attack (CVA) patients were considered. The National Institutes of Health Stroke Scale (NIHSS) was assessed to determine neurological impairment within 24 hours of onset. On the basis of NIHSS, the patients were divided into mild, moderate, and severe groups. CT scans for all patients were obtained on the first day, and based on the CT scan results, the patients were also divided into hemorrhagic, ischemic, and normal groups. The serum level of NI and the LS were determined.
    • RESULTS: The mean serum levels of NI in the 37 patients on the first and fifth days of stroke were 8.43 ± 1.23 and 7.46 ± 0.72 µmol/liter, with no significant difference. The analysis of the data indicated no significant correlation between NI concentration and NIHSS, but in patients with abnormal CT scans a statistically significant correlation existed between NI concentration and LS (r = 0.521, p = 0.022).
    • CONCLUSION: The NI concentration is not correlated with NIHSS, but it is correlated with LS. The sources of NO metabolites differ: neuronal, endothelial, or inducible. Therefore the measured concentration of NO or NI does not exactly represent endothelial NO, which is the form beneficial in stroke, and the relationship between the NO source subtypes and NIHSS or LS needs further investigation.
    • KEYWORDS: Nitric Oxide, Stroke

  17. Scikit-learn: Machine Learning in Python

    OpenAIRE

    Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu; Perrot, Matthieu

    2011-01-01

    International audience; Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic ...

  18. Scikit-learn: Machine Learning in Python

    OpenAIRE

    Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Louppe, Gilles; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu

    2012-01-01

    Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings....
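
    For completeness, the uniform estimator API that the abstract emphasizes looks like the following in practice (a standard library usage example, not taken from the paper): every model is constructed, fit, and evaluated the same way:

        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # Uniform estimator API: construct, fit, predict/score --
        # the same pattern applies to every scikit-learn model.
        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
        print("test accuracy:", clf.score(X_test, y_test))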

  19. Axial flux permanent magnet brushless machines

    CERN Document Server

    Gieras, Jacek F; Kamper, Maarten J

    2008-01-01

    Axial Flux Permanent Magnet (AFPM) brushless machines are modern electrical machines with many advantages over their conventional counterparts. They are being increasingly used in consumer electronics, public life, instrumentation and automation systems, clinical engineering, industrial electromechanical drives, the automobile manufacturing industry, electric and hybrid electric vehicles, marine vessels and toys. They are also used in more-electric aircraft and many other applications on a larger scale. New applications have also emerged in distributed generation systems (wind turbine generators

  20. Molecular machines open cell membranes.

    Science.gov (United States)

    García-López, Víctor; Chen, Fang; Nilewski, Lizanne G; Duret, Guillaume; Aliyan, Amir; Kolomeisky, Anatoly B; Robinson, Jacob T; Wang, Gufeng; Pal, Robert; Tour, James M

    2017-08-30

    Beyond the more common chemical delivery strategies, several physical techniques are used to open the lipid bilayers of cellular membranes. These include using electric and magnetic fields, temperature, ultrasound or light to introduce compounds into cells, to release molecular species from cells or to selectively induce programmed cell death (apoptosis) or uncontrolled cell death (necrosis). More recently, molecular motors and switches that can change their conformation in a controlled manner in response to external stimuli have been used to produce mechanical actions on tissue for biomedical applications. Here we show that molecular machines can drill through cellular bilayers using their molecular-scale actuation, specifically nanomechanical action. Upon physical adsorption of the molecular motors onto lipid bilayers and subsequent activation of the motors using ultraviolet light, holes are drilled in the cell membranes. We designed molecular motors and complementary experimental protocols that use nanomechanical action to induce the diffusion of chemical species out of synthetic vesicles, to enhance the diffusion of traceable molecular machines into and within live cells, to induce necrosis and to introduce chemical species into live cells. We also show that, by using molecular machines that bear short peptide addends, nanomechanical action can selectively target specific cell-surface recognition sites. Beyond the in vitro applications demonstrated here, we expect that molecular machines could also be used in vivo, especially as their design progresses to allow two-photon, near-infrared and radio-frequency activation.

  1. Molecular machines open cell membranes

    Science.gov (United States)

    García-López, Víctor; Chen, Fang; Nilewski, Lizanne G.; Duret, Guillaume; Aliyan, Amir; Kolomeisky, Anatoly B.; Robinson, Jacob T.; Wang, Gufeng; Pal, Robert; Tour, James M.

    2017-08-01

    Beyond the more common chemical delivery strategies, several physical techniques are used to open the lipid bilayers of cellular membranes. These include using electric and magnetic fields, temperature, ultrasound or light to introduce compounds into cells, to release molecular species from cells or to selectively induce programmed cell death (apoptosis) or uncontrolled cell death (necrosis). More recently, molecular motors and switches that can change their conformation in a controlled manner in response to external stimuli have been used to produce mechanical actions on tissue for biomedical applications. Here we show that molecular machines can drill through cellular bilayers using their molecular-scale actuation, specifically nanomechanical action. Upon physical adsorption of the molecular motors onto lipid bilayers and subsequent activation of the motors using ultraviolet light, holes are drilled in the cell membranes. We designed molecular motors and complementary experimental protocols that use nanomechanical action to induce the diffusion of chemical species out of synthetic vesicles, to enhance the diffusion of traceable molecular machines into and within live cells, to induce necrosis and to introduce chemical species into live cells. We also show that, by using molecular machines that bear short peptide addends, nanomechanical action can selectively target specific cell-surface recognition sites. Beyond the in vitro applications demonstrated here, we expect that molecular machines could also be used in vivo, especially as their design progresses to allow two-photon, near-infrared and radio-frequency activation.

  2. Mid-size urbanism

    NARCIS (Netherlands)

    Zwart, de B.A.M.

    2013-01-01

    To speak of the project for the mid-size city is to speculate about the possibility of mid-size urbanity as a design category. An urbanism not necessarily defined by the scale of the intervention or the size of the city undergoing transformation, but by the framing of the issues at hand and the

  3. Size effect studies on smooth tensile specimens at room temperature and 400 °C

    International Nuclear Information System (INIS)

    Krompholz, K.; Kamber, J.; Groth, E.; Kalkhof, D.

    2000-06-01

    One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess the size effect related to deformation and failure models as well as material data under quasistatic and dynamic conditions in homogeneous and non-homogeneous states of strain. For these investigations the reactor pressure vessel material 20 MnMoNi 55 was selected. It was subjected to a size effect study on smooth scaled tensile specimens of three sizes. Two strain rates (2x10^-5/s and 10^-3/s) and two temperatures (room temperature and 400 °C) were selected. The investigations are aimed at supporting a gradient plasticity approach to size effects. Tests on the small specimens (diameters 3 and 9 mm) were performed on an electromechanical test machine, while the large specimens (diameter 30 mm) had to be tested on a servohydraulic closed-loop test machine with a force capacity of 1000 kN

  4. Size effect studies on smooth tensile specimens at room temperature and 400 °C

    Energy Technology Data Exchange (ETDEWEB)

    Krompholz, K.; Kamber, J.; Groth, E.; Kalkhof, D

    2000-06-15

    One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess the size effect related to deformation and failure models as well as material data under quasistatic and dynamic conditions in homogeneous and non-homogeneous states of strain. For these investigations the reactor pressure vessel material 20 MnMoNi 55 was selected. It was subjected to a size effect study on smooth scaled tensile specimens of three sizes. Two strain rates (2x10^-5/s and 10^-3/s) and two temperatures (room temperature and 400 °C) were selected. The investigations are aimed at supporting a gradient plasticity approach to size effects. Tests on the small specimens (diameters 3 and 9 mm) were performed on an electromechanical test machine, while the large specimens (diameter 30 mm) had to be tested on a servohydraulic closed-loop test machine with a force capacity of 1000 kN.

  5. Dynamic cellular manufacturing system considering machine failure and workload balance

    Science.gov (United States)

    Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad

    2018-02-01

    Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the assignment of operators to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is first validated with the GAMS software on small-sized problems and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-sized problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
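
    Two of these objective directions can be made concrete with a small sketch. The data structures and figures below are illustrative assumptions, not the paper's actual model; they show only how workload variance across cells and labor utilization might be scored for a candidate cell configuration.

        import statistics

        # Hedged sketch of two of the paper's objective directions for a
        # candidate cell configuration; data are illustrative assumptions.
        def workload_variance(cell_workloads):
            """Third objective: variance of workload across cells (minimize)."""
            return statistics.pvariance(cell_workloads)

        def labor_utilization(assigned_hours, available_hours):
            """Second objective: share of available operator hours used (maximize)."""
            return sum(assigned_hours) / sum(available_hours)

        cell_hours = [120.0, 95.0, 130.0]    # workload (hours) aggregated per cell
        print("workload variance:", workload_variance(cell_hours))
        print("labor utilization:", labor_utilization([300.0], [360.0]))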

  6. Scaling Thomson scattering to big machines

    Czech Academy of Sciences Publication Activity Database

    Bílková, Petra; Walsh, M.; Böhm, Petr; Bassan, M.; Aftanas, Milan; Pánek, Radomír

    2016-01-01

    Vol. 11, No. 3 (2016), article No. C03302. ISSN 1748-0221. [17th International Symposium on Laser-Aided Plasma Diagnostics, Sapporo, 27.09.2015-01.10.2015] Institutional support: RVO:61389021 Keywords: Spectrometers * Nuclear instruments and methods for hot plasma diagnostics * Plasma diagnostics - interferometry * Spectroscopy and imaging Subject RIV: BL - Plasma and Gas Discharge Physics OECD field: 2.11 Other engineering and technologies Impact factor: 1.220, year: 2016 http://iopscience.iop.org/article/10.1088/1748-0221/11/03/C03023/pdf

  7. Experimental determination of the dimensionless scaling parameter of energy transport in tokamaks

    International Nuclear Information System (INIS)

    Luce, T.C.; Petty, C.C.

    1995-07-01

    Controlled fusion experiments have focused on the variation of the plasma characteristics as the engineering or control parameters are systematically changed. This has led to the development of extrapolation formulae for prediction of future device performance using these same variables as a basis. Recently, it was noticed that present-day tokamaks can operate with all of the dimensionless variables which appear in the Vlasov-Maxwell system of equations at values projected for a fusion power plant, with the exception of the parameter ρ*, the gyroradius normalized to the machine size. The scaling with this parameter is related to the benefit of increasing the size of the machine, either directly or effectively by increasing the magnetic field. It is exactly this scaling which is subject to systematic error in the inter-machine databases, and it is the cost driver for any future machine. If this scaling can be fixed by a series of single-machine experiments, much as the current and power scalings have been, confidence in the prediction of future device performance would be greatly enhanced. While carrying out experiments of this type, it was also found that the ρ* scaling can illuminate the underlying physics of energy transport. Conclusions drawn from experiments on the DIII-D tokamak in these two areas are the subject of this paper.
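
    The normalized gyroradius can be written ρ* = ρ_i/a, with the thermal ion gyroradius ρ_i = sqrt(m_i T_i)/(e B). A minimal sketch, using illustrative deuterium plasma parameters rather than DIII-D data, shows how doubling the machine size (or the field) at fixed temperature halves ρ*:

        import math

        # Hedged sketch: normalized gyroradius rho* = rho_i / a, the parameter
        # whose scaling these experiments isolate. Parameters are illustrative
        # assumptions, not DIII-D data.
        E_CHARGE = 1.602e-19        # elementary charge, C
        M_DEUTERON = 3.344e-27      # deuteron mass, kg

        def rho_star(T_keV, B_T, a_m, m_ion=M_DEUTERON):
            """rho* = sqrt(m*T)/(e*B) / a, with T converted from keV to joules."""
            T_J = T_keV * 1e3 * E_CHARGE
            rho_i = math.sqrt(m_ion * T_J) / (E_CHARGE * B_T)
            return rho_i / a_m

        # Doubling the minor radius at fixed temperature and field halves rho*:
        print(rho_star(T_keV=5.0, B_T=2.0, a_m=0.6))
        print(rho_star(T_keV=5.0, B_T=2.0, a_m=1.2))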

  8. HUMAN MACHINE COOPERATIVE TELEROBOTICS

    International Nuclear Information System (INIS)

    William R. Hamel; Spivey Douglass; Sewoong Kim; Pamela Murray; Yang Shou; Sriram Sridharan; Ge Zhang; Scott Thayer; Rajiv V. Dubey

    2003-01-01

    research described as Human Machine Cooperative Telerobotics (HMCTR). The HMCTR combines the telerobot with robotic control techniques to improve system efficiency and reliability in teleoperation mode. In this topical report, the control strategy, configuration, and experimental results of Human Machine Cooperative Telerobotics (HMCTR), which modifies and limits the commands of the human operator to follow predefined constraints in the teleoperation mode, are described. The current implementation is a laboratory-scale system that will be incorporated into an engineering-scale system at the Oak Ridge National Laboratory in the future.
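
    The idea of modifying and limiting operator commands against predefined constraints can be illustrated with a toy scheme. The box-constraint clamping below is one possible approach under stated assumptions, not the control law from the report.

        # Hedged sketch: limiting an operator's commanded tool velocity so the
        # tool stays inside a predefined workspace box. This is one illustrative
        # constraint-enforcement scheme, not the report's actual controller.
        def limit_command(position, velocity, bounds, dt):
            """Zero any velocity component that would carry the tool out of bounds."""
            limited = []
            for p, v, (lo, hi) in zip(position, velocity, bounds):
                nxt = p + v * dt
                if nxt < lo or nxt > hi:
                    v = 0.0              # block motion through the constraint
                limited.append(v)
            return limited

        pos = [0.95, 0.0, 0.5]
        cmd = [0.2, 0.1, -0.3]           # operator's commanded velocity (m/s)
        box = [(0.0, 1.0), (-0.5, 0.5), (0.0, 1.0)]
        print(limit_command(pos, cmd, box, dt=0.5))   # x-motion is blocked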

  9. HUMAN MACHINE COOPERATIVE TELEROBOTICS

    Energy Technology Data Exchange (ETDEWEB)

    William R. Hamel; Spivey Douglass; Sewoong Kim; Pamela Murray; Yang Shou; Sriram Sridharan; Ge Zhang; Scott Thayer; Rajiv V. Dubey

    2003-06-30

    described as Human Machine Cooperative Telerobotics (HMCTR). The HMCTR combines the telerobot with robotic control techniques to improve system efficiency and reliability in teleoperation mode. In this topical report, the control strategy, configuration, and experimental results of Human Machine Cooperative Telerobotics (HMCTR), which modifies and limits the commands of the human operator to follow predefined constraints in the teleoperation mode, are described. The current implementation is a laboratory-scale system that will be incorporated into an engineering-scale system at the Oak Ridge National Laboratory in the future.

  10. Vending machine assessment methodology. A systematic review.

    Science.gov (United States)

    Matthews, Melissa A; Horacek, Tanya M

    2015-07-01

    The nutritional quality of food and beverage products sold in vending machines has been implicated as a contributing factor to the development of an obesogenic food environment. How comprehensive, reliable, and valid are the current assessment tools for vending machines to support or refute these claims? A systematic review was conducted to summarize, compare, and evaluate the current methodologies and available tools for vending machine assessment. A total of 24 relevant research studies published between 1981 and 2013 met inclusion criteria for this review. The methodological variables reviewed in this study include assessment tool type, study location, machine accessibility, product availability, healthfulness criteria, portion size, price, product promotion, and quality of scientific practice. There were wide variations in the depth of the assessment methodologies and product healthfulness criteria utilized among the reviewed studies. Of the reviewed studies, 39% evaluated machine accessibility, 91% evaluated product availability, 96% established healthfulness criteria, 70% evaluated portion size, 48% evaluated price, 52% evaluated product promotion, and 22% evaluated the quality of scientific practice. Of all reviewed articles, 87% reached conclusions that provided insight into the healthfulness of vended products and/or vending environment. Product healthfulness criteria and complexity for snack and beverage products were also found to be variable between the reviewed studies. These findings make it difficult to compare results between studies. A universal, valid, and reliable vending machine assessment tool that is comprehensive yet user-friendly is recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.
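
    As a rough illustration of what a unified assessment tool might record, the sketch below scores a machine's product availability against an assumed healthfulness cutoff; the record fields, rubric, and calorie cutoff are hypothetical, not criteria endorsed by the review.

        from dataclasses import dataclass

        # Hedged sketch: a minimal vending-machine audit record covering a few
        # of the variables the review compares (availability, portion size,
        # price, promotion). The scoring rubric is entirely hypothetical.
        @dataclass
        class VendedItem:
            name: str
            kcal_per_portion: int
            price_usd: float
            promoted: bool

        def healthful_share(items, kcal_cutoff=200):
            """Fraction of slots meeting an assumed calorie cutoff."""
            ok = sum(1 for it in items if it.kcal_per_portion <= kcal_cutoff)
            return ok / len(items)

        machine = [VendedItem("chips", 250, 1.50, True),
                   VendedItem("nuts", 180, 2.00, False),
                   VendedItem("water", 0, 1.25, False)]
        print(f"healthful share: {healthful_share(machine):.0%}")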

  11. The Influence of Age and Sex on Genetic Associations with Adult Body Size and Shape: A Large-Scale Genome-Wide Interaction Study

    NARCIS (Netherlands)

    T.W. Winkler (Thomas W.); A.E. Justice (Anne); M.J. Graff (Maud J.L.); Barata, L. (Llilda); M.F. Feitosa (Mary Furlan); Chu, S. (Su); J. Czajkowski (Jacek); T. Esko (Tõnu); M. Fall (Magnus); T.O. Kilpeläinen (Tuomas); Y. Lu (Yingchang); R. Mägi (Reedik); E. Mihailov (Evelin); T.H. Pers (Tune); Rüeger, S. (Sina); A. Teumer (Alexander); G.B. Ehret (Georg); T. Ferreira (Teresa); N.L. Heard-Costa (Nancy); J. Karjalainen (Juha); V. Lagou (Vasiliki); A. Mahajan (Anubha); Neinast, M.D. (Michael D.); I. Prokopenko (Inga); J. Simino (Jeannette); T.M. Teslovich (Tanya M.); R. Jansen; H.J. Westra (Harm-Jan); C.C. White (Charles); D. Absher (Devin); T.S. Ahluwalia (Tarunveer Singh); S. Ahmad (Shafqat); E. Albrecht (Eva); A.C. Alves (Alexessander Couto); Bragg-Gresham, J.L. (Jennifer L.); A.J. de Craen (Anton); J.C. Bis (Joshua); A. Bonnefond (Amélie); G. Boucher (Gabrielle); G. Cadby (Gemma); Y.-C. Cheng (Yu-Ching); Chiang, C.W. (Charleston W K); G. Delgado; A. Demirkan (Ayşe); N. Dueker (Nicole); N. Eklund (Niina); G. Eiriksdottir (Gudny); J. Eriksson (Joel); B. Feenstra (Bjarke); K. Fischer (Krista); F. Frau (Francesca); T.E. Galesloot (Tessel); F. Geller (Frank); A. Goel (Anuj); M. Gorski (Mathias); T.B. Grammer (Tanja); S. Gustafsson (Stefan); Haitjema, S. (Saskia); J.J. Hottenga (Jouke Jan); J.E. Huffman (Jennifer); A.U. Jackson (Anne); K.B. Jacobs (Kevin); A. Johansson (Åsa); M. Kaakinen (Marika); M.E. Kleber (Marcus); J. Lahti (Jari); I.M. Leach (Irene Mateo); Lehne, B. (Benjamin); Liu, Y. (Youfang); K.S. Lo; M. Lorentzon (Mattias); J. Luan (Jian'An); P.A. Madden (Pamela); M. Mangino (Massimo); B. McKnight (Barbara); Medina-Gomez, C. (Carolina); K.L. Monda (Keri); M.E. Montasser (May E.); G. Müller (Gabriele); M. Müller-Nurasyid (Martina); I.M. Nolte (Ilja); Panoutsopoulou, K. (Kalliope); L. Pascoe (Laura); L. Paternoster (Lavinia); N.W. Rayner (Nigel William); F. Renström (Frida); Rizzi, F. (Federica); L.M. Rose (Lynda); Ryan, K.A. (Kathy A.); P. Salo (Perttu); S. Sanna (Serena); H. Scharnagl (Hubert); Shi, J. (Jianxin); A.V. Smith (Albert Vernon); L. Southam (Lorraine); A. Stancáková (Alena); V. Steinthorsdottir (Valgerdur); R.J. Strawbridge (Rona); Sung, Y.J. (Yun Ju); I. Tachmazidou (Ioanna); T. Tanaka (Toshiko); G. Thorleifsson (Gudmar); S. Trompet (Stella); N. Pervjakova (Natalia); J.P. Tyrer (Jonathan); L. Vandenput (Liesbeth); S.W. Van Der Laan (Sander W.); N. van der Velde (Nathalie); J. van Setten (Jessica); J.V. van Vliet-Ostaptchouk (Jana); N. Verweij (Niek); E. Vlachopoulou (Efthymia); L. Waite (Lindsay); S.R. Wang (Sophie); Z. Wang (Zhaoming); S.H. Wild (Sarah); C. Willenborg (Christina); J.F. Wilson (James); A. Wong (Andrew); Yang, J. (Jian); L. Yengo (Loic); L.M. Yerges-Armstrong (Laura); Yu, L. (Lei); W. Zhang (Weihua); Zhao, J.H. (Jing Hua); E.A. Andersson (Ehm Astrid); S.J.L. Bakker (Stephan); D. Baldassarre (Damiano); Banasik, K. (Karina); Barcella, M. (Matteo); Barlassina, C. (Cristina); C. Bellis (Claire); P. Benaglio (Paola); J. Blangero (John); M. Blüher (Matthias); Bonnet, F. (Fabrice); L.L. Bonnycastle (Lori); H.A. Boyd (Heather); M. Bruinenberg (M.); Buchman, A.S. (Aron S.); H. Campbell (Harry); Y.D. Chen (Y.); P.S. Chines (Peter); S. Claudi-Boehm (Simone); J.W. Cole (John W.); F.S. Collins (Francis); E.J.C. de Geus (Eco); L.C.P.G.M. de Groot (Lisette); M. Dimitriou (Maria); J. Duan (Jubao); S. Enroth (Stefan); E. Eury (Elodie); A.-E. Farmaki (Aliki-Eleni); N.G. Forouhi (Nita); N. Friedrich (Nele); P.V. Gejman (Pablo); B. Gigante (Bruna); N. Glorioso (Nicola); A. 
Go (Attie); R.F. Gottesman (Rebecca); J. Gräßler (Jürgen); H. Grallert (Harald); N. Grarup (Niels); Gu, Y.-M. (Yu-Mei); L. Broer (Linda); A.C. Ham (Annelies); T. Hansen (T.); T.B. Harris (Tamara); C.A. Hartman (Catharina A.); Hassinen, M. (Maija); N. Hastie (Nick); A.T. Hattersley (Andrew); A.C. Heath (Andrew); A.K. Henders (Anjali); D.G. Hernandez (Dena); H.L. Hillege (Hans); O.L. Holmen (Oddgeir); G.K. Hovingh (Kees); J. Hui (Jennie); Husemoen, L.L. (Lise L.); Hutri-Kähönen, N. (Nina); P.G. Hysi (Pirro); T. Illig (Thomas); P.L. de Jager (Philip); S. Jalilzadeh (Shapour); T. Jorgensen (Torben); J.W. Jukema (Jan Wouter); Juonala, M. (Markus); S. Kanoni (Stavroula); M. Karaleftheri (Maria); K.T. Khaw; L. Kinnunen (Leena); T. Kittner (Thomas); W. Koenig (Wolfgang); I. Kolcic (Ivana); P. Kovacs (Peter); Krarup, N.T. (Nikolaj T.); W. Kratzer (Wolfgang); Krüger, J. (Janine); Kuh, D. (Diana); M. Kumari (Meena); T. Kyriakou (Theodosios); C. Langenberg (Claudia); L. Lannfelt (Lars); C. Lanzani (Chiara); V. Lotay (Vaneet); L.J. Launer (Lenore); K. Leander (Karin); J. Lindström (Jaana); A. Linneberg (Allan); Liu, Y.-P. (Yan-Ping); S. Lobbens (Stéphane); R.N. Luben (Robert); V. Lyssenko (Valeriya); S. Männistö (Satu); P.K. Magnusson (Patrik); W.L. McArdle (Wendy); C. Menni (Cristina); S. Merger (Sigrun); L. Milani (Lili); Montgomery, G.W. (Grant W.); A.P. Morris (Andrew); N. Narisu (Narisu); M. Nelis (Mari); K.K. Ong (Ken); A. Palotie (Aarno); L. Perusse (Louis); I. Pichler (Irene); M.G. Pilia (Maria Grazia); A. Pouta (Anneli); Rheinberger, M. (Myriam); Ribel-Madsen, R. (Rasmus); Richards, M. (Marcus); K.M. Rice (Kenneth); T.K. Rice (Treva K.); C. Rivolta (Carlo); V. Salomaa (Veikko); A.R. Sanders (Alan); M.A. Sarzynski (Mark A.); S. Scholtens (Salome); R.A. Scott (Robert); W.R. Scott (William R.); S. Sebert (Sylvain); S. Sengupta (Sebanti); B. Sennblad (Bengt); T. Seufferlein (Thomas); A. Silveira (Angela); P.E. Slagboom (Eline); J.H. Smit (Jan); T. Sparsø (Thomas); K. Stirrups (Kathy); R.P. Stolk (Ronald); H.M. Stringham (Heather); Swertz, M.A. (Morris A.); A.J. Swift (Amy); A.C. Syvänen; S.-T. Tan (Sian-Tsung); B. Thorand (Barbara); A. Tönjes (Anke); Tremblay, A. (Angelo); E. Tsafantakis (Emmanouil); P.J. van der Most (Peter); U. Völker (Uwe); M.-C. Vohl (Marie-Claude); J.M. Vonk (Judith); M. Waldenberger (Melanie); Walker, R.W. (Ryan W.); R. Wennauer (Roman); E. Widen; G.A.H.M. Willemsen (Gonneke); T. Wilsgaard (Tom); A.F. Wright (Alan); M.C. Zillikens (Carola); S. Van Dijk (Suzanne); N.M. van Schoor (Natasja); F.W. Asselbergs (Folkert); P.I.W. de Bakker (Paul); J.S. Beckmann (Jacques); J.P. Beilby (John); D.A. Bennett (David A.); R.N. Bergman (Richard); S.M. Bergmann (Sven); C.A. Böger (Carsten); B.O. Boehm (Bernhard); E.A. Boerwinkle (Eric); D.I. Boomsma (Dorret); S.R. Bornstein (Stefan); E.P. Bottinger (Erwin); C. Bouchard (Claude); J.C. Chambers (John); S.J. Chanock (Stephen); D.I. Chasman (Daniel); F. Cucca (Francesco); D. Cusi (Daniele); G.V. Dedoussis (George); J. Erdmann (Jeanette); K. Hagen (Knut); D. Evans; U. de Faire (Ulf); M. Farrall (Martin); L. Ferrucci (Luigi); I. Ford (Ian); L. Franke (Lude); P.W. Franks (Paul); P. Froguel (Philippe); R.T. Gansevoort (Ron); C. Gieger (Christian); H. Grönberg (Henrik); V. Gudnason (Vilmundur); U. Gyllensten (Ulf); P. Hall (Per); A. Hamsten (Anders); P. van der Harst (Pim); C. Hayward (Caroline); M. Heliovaara (Markku); C. Hengstenberg (Christian); A.A. Hicks (Andrew); A. Hingorani (Aroon); A. Hofman (Albert); Hu, F. (Frank); H.V. 
Huikuri (Heikki); K. Hveem (Kristian); A. James (Alan); Jordan, J.M. (Joanne M.); A. Jula (Antti); M. Kähönen (Mika); E. Kajantie (Eero); S. Kathiresan (Sekar); L.A.L.M. Kiemeney (Bart); M. Kivimaki (Mika); P. Knekt; H. Koistinen (Heikki); J.S. Kooner (Jaspal S.); S. Koskinen (Seppo); J. Kuusisto (Johanna); W. Maerz (Winfried); N.G. Martin (Nicholas); M. Laakso (Markku); T.A. Lakka (Timo); T. Lehtimäki (Terho); G. Lettre (Guillaume); D.F. Levinson (Douglas); W.H.L. Kao (Wen); M.L. Lokki; Mäntyselkä, P. (Pekka); M. Melbye (Mads); A. Metspalu (Andres); B.D. Mitchell (Braxton); F.L. Moll (Frans); J.C. Murray (Jeffrey); A.W. Musk (Arthur); M.S. Nieminen (Markku); I. Njølstad (Inger); C. Ohlsson (Claes); A.J. Oldehinkel (Albertine); B.A. Oostra (Ben); C. Palmer (Cameron); J.S. Pankow (James); G. Pasterkamp (Gerard); N.L. Pedersen (Nancy); O. Pedersen (Oluf); B.W.J.H. Penninx (Brenda); M. Perola (Markus); A. Peters (Annette); O. Polasek (Ozren); P.P. Pramstaller (Peter Paul); Psaty, B.M. (Bruce M.); Qi, L. (Lu); T. Quertermous (Thomas); Raitakari, O.T. (Olli T.); T. Rankinen (Tuomo); R. Rauramaa (Rainer); P.M. Ridker (Paul); J.D. Rioux (John); F. Rivadeneira Ramirez (Fernando); J.I. Rotter (Jerome I.); I. Rudan (Igor); H.M. den Ruijter (Hester ); J. Saltevo (Juha); N. Sattar (Naveed); Schunkert, H. (Heribert); P.E.H. Schwarz (Peter); A.R. Shuldiner (Alan); J. Sinisalo (Juha); H. Snieder (Harold); T.I.A. Sørensen (Thorkild); T.D. Spector (Timothy); Staessen, J.A. (Jan A.); Stefania, B. (Bandinelli); U. Thorsteinsdottir (Unnur); M. Stumvoll (Michael); J.-C. Tardif (Jean-Claude); E. Tremoli (Elena); J. Tuomilehto (Jaakko); A.G. Uitterlinden (André); M. Uusitupa (Matti); A.L.M. Verbeek; S.H.H.M. Vermeulen (Sita); J. Viikari (Jorma); Vitart, V. (Veronique); H. Völzke (Henry); P. Vollenweider (Peter); G. Waeber (Gérard); M. Walker (Mark); H. Wallaschofski (Henri); N.J. Wareham (Nick); H. Watkins (Hugh); E. Zeggini (Eleftheria); A. Chakravarti (Aravinda); Clegg, D.J. (Deborah J.); L.A. Cupples (Adrienne); P. Gordon-Larsen (Penny); C.E. Jaquish (Cashell); D.C. Rao (Dabeeru C.); Abecasis, G.R. (Goncalo R.); T.L. Assimes (Themistocles); I.E. Barroso (Inês); S.I. Berndt (Sonja); M. Boehnke (Michael); P. Deloukas (Panagiotis); C.S. Fox (Caroline); L. Groop (Leif); D. Hunter (David); E. Ingelsson (Erik); R.C. Kaplan (Robert); McCarthy, M.I. (Mark I.); K.L. Mohlke (Karen); J.R. O´Connell; Schlessinger, D. (David); D.P. Strachan (David); J-A. Zwart (John-Anker); C.M. van Duijn (Cornelia); J.N. Hirschhorn (Joel); C.M. Lindgren (Cecilia M.); I.M. Heid (Iris); K.E. North (Kari); I.B. Borecki (Ingrid); Z. Kutalik (Zoltán); R.J.F. Loos (Ruth)

    2015-01-01

    Genome-wide association studies (GWAS) have identified more than 100 genetic variants contributing to BMI, a measure of body size, or waist-to-hip ratio (adjusted for BMI, WHRadjBMI), a measure of body shape. Body size and shape change as people grow older and these changes differ
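
    The interaction analyses such a study performs amount to testing, variant by variant, whether a genetic effect on body size differs by age or sex. A minimal sketch of a genotype-by-age interaction regression on simulated data follows; it is illustrative only, not the consortium's actual analysis pipeline.

        import numpy as np

        # Hedged sketch: a per-SNP genotype-by-age interaction test using a
        # plain linear model on simulated data.
        rng = np.random.default_rng(0)
        n = 2000
        geno = rng.integers(0, 3, n).astype(float)   # allele dosage 0/1/2
        age = rng.uniform(20, 80, n)
        bmi = 25 + 0.2 * geno + 0.02 * age + 0.01 * geno * age \
              + rng.normal(0, 2, n)

        # Design matrix: intercept, G, age, G x age.
        X = np.column_stack([np.ones(n), geno, age, geno * age])
        beta, *_ = np.linalg.lstsq(X, bmi, rcond=None)
        print("interaction estimate (G x age):", round(beta[3], 4))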

  12. Virtual Machine Language

    Science.gov (United States)

    Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John

    2005-01-01

    Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times on the order of seconds.
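
    The record does not show VML syntax, so the sketch below is only a Python analogue of the named-block idea it describes: parameterized blocks registered by name and expanded into flat command steps, roughly as an uplink product might be. The block and command names are invented for illustration.

        # Hedged sketch of the named-block concept: blocks accept parameters
        # and expand into command steps. This is an illustrative Python
        # analogue, not actual VML syntax or its flight interpreter.
        BLOCKS = {}

        def block(fn):
            """Register a named, parameterized block of command steps."""
            BLOCKS[fn.__name__] = fn
            return fn

        @block
        def warmup_camera(heater_on_s):
            return [("CMD_HEATER_ON",), ("WAIT", heater_on_s), ("CMD_HEATER_OFF",)]

        def run_sequence(calls):
            """Expand block calls into a flat command list."""
            steps = []
            for name, args in calls:
                steps.extend(BLOCKS[name](*args))
            return steps

        print(run_sequence([("warmup_camera", (120,))]))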

  13. Pattern recognition & machine learning

    CERN Document Server

    Anzai, Y

    1992-01-01

    This is the first text to provide a unified and self-contained introduction to visual pattern recognition and machine learning. It is useful as a general introduction to artificial intelligence and knowledge engineering, and no previous knowledge of pattern recognition or machine learning is necessary. It covers the basics of various pattern recognition and machine learning methods. Translated from Japanese, the book also features chapter exercises, keywords, and summaries.

  14. Support vector machines applications

    CERN Document Server

    Guo, Guodong

    2014-01-01

    Support vector machines (SVM) have both a solid mathematical background and good performance in practical applications. This book focuses on the recent advances and applications of the SVM in different areas, such as image processing, medical practice, computer vision, pattern recognition, machine learning, applied statistics, business intelligence, and artificial intelligence. The aim of this book is to create a comprehensive source on support vector machine applications, especially some recent advances.
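
    For readers new to the topic, a minimal classification example of the kind applied across these areas follows; scikit-learn and the iris dataset are this sketch's choices, not ones prescribed by the book.

        # Hedged sketch: a minimal SVM classifier with an RBF kernel.
        from sklearn import datasets
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = datasets.load_iris(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # standard RBF-kernel SVM
        clf.fit(X_tr, y_tr)
        print("test accuracy:", clf.score(X_te, y_te))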

  15. The Newest Machine Material

    International Nuclear Information System (INIS)

    Seo, Yeong Seop; Choe, Byeong Do; Bang, Meong Sung

    2005-08-01

    This book describes machine materials, covering the classification and selection of machine materials, material structure and bonding, solidification of metals and crystal structure, equilibrium diagrams, properties of metallic materials, elasticity and plasticity, examination of metal structure, and material testing including nondestructive testing. It also covers steel materials, including the heat treatment of steel, cast iron and cast steel, as well as nonferrous metal materials, nonmetallic materials, and new materials.

  16. Introduction to machine learning

    OpenAIRE

    Baştanlar, Yalın; Özuysal, Mustafa

    2014-01-01

    The machine learning field, which can be briefly defined as enabling computers to make successful predictions using past experiences, has exhibited an impressive development recently with the help of the rapid increase in the storage capacity and processing power of computers. Together with many other disciplines, machine learning methods have been widely employed in bioinformatics. The difficulties and cost of biological analyses have led to the development of sophisticated machine learning app...

  17. Machinability of advanced materials

    CERN Document Server

    Davim, J Paulo

    2014-01-01

    Machinability of Advanced Materials addresses the level of difficulty involved in machining a material, or multiple materials, with the appropriate tooling and cutting parameters.  A variety of factors determine a material's machinability, including tool life rate, cutting forces and power consumption, surface integrity, limiting rate of metal removal, and chip shape. These topics, among others, and multiple examples comprise this research resource for engineering students, academics, and practitioners.

  18. Machining of titanium alloys

    CERN Document Server

    2014-01-01

    This book presents a collection of examples illustrating the recent research advances in the machining of titanium alloys. These materials have excellent strength and fracture toughness as well as low density and good corrosion resistance; however, machinability is still poor due to their low thermal conductivity and high chemical reactivity with cutting tool materials. This book presents solutions to enhance machinability in titanium-based alloys and serves as a useful reference to professionals and researchers in aerospace, automotive and biomedical fields.

  19. Challenges for coexistence of machine to machine and human to human applications in mobile network

    DEFF Research Database (Denmark)

    Sanyal, R.; Cianca, E.; Prasad, Ramjee

    2012-01-01

    A key factor in the evolution of mobile networks towards 4G is bringing to fruition high bandwidth per mobile node. With the advent of a new class of applications, namely Machine-to-Machine, we foresee new challenges where bandwidth per user is no longer the primary driver...... be evolved to address the various nuances of mobile devices used by man and machines. The bigger question is as follows: is the state-of-the-art mobile network designed optimally to cater to both Human-to-Human and Machine-to-Machine applications? This paper presents the primary challenges...... As an immediate impact of the high penetration of M2M devices, we envisage a surge in signaling messages for mobility and location management. Cell sizes will shrink due to high tele-density, resulting in even more signaling messages related to handoff and location updates. The mobile network should......
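
    The handoff-signaling claim can be made quantitative with a standard fluid-flow approximation, in which a user's boundary-crossing rate in a circular cell of radius r is h = v·L/(π·A) = 2v/(πr). The model and the numbers below are illustrative; the paper does not give this calculation explicitly.

        import math

        # Hedged sketch: fluid-flow estimate of per-user handoff rate,
        # h = 2v / (pi * r), for a circular cell of radius r.
        def handoff_rate(speed_mps, radius_m):
            """Mean handoffs per second for one user (fluid-flow approximation)."""
            return 2.0 * speed_mps / (math.pi * radius_m)

        for r in (1000.0, 500.0, 250.0):     # shrinking cell radius
            h = handoff_rate(1.5, r)         # pedestrian speed ~1.5 m/s
            print(f"r={r:6.0f} m -> {h * 3600:.1f} handoffs/hour per user")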

  20. Tribology in machine design

    CERN Document Server

    Stolarski, Tadeusz

    1999-01-01

    ""Tribology in Machine Design is strongly recommended for machine designers, and engineers and scientists interested in tribology. It should be in the engineering library of companies producing mechanical equipment.""Applied Mechanics ReviewTribology in Machine Design explains the role of tribology in the design of machine elements. It shows how algorithms developed from the basic principles of tribology can be used in a range of practical applications within mechanical devices and systems.The computer offers today's designer the possibility of greater stringen