WorldWideScience

Sample records for machine size scaling

  1. Nano-scale machining of polycrystalline coppers - effects of grain size and machining parameters.

    Science.gov (United States)

    Shi, Jing; Wang, Yachao; Yang, Xiaoping

    2013-11-22

    In this study, a comprehensive investigation of nano-scale machining of polycrystalline copper structures is carried out by molecular dynamics (MD) simulation. Simulation cases are constructed to study the impact of grain size as well as of various machining parameters. Six polycrystalline copper structures are produced, with equivalent grain sizes of 5.32, 6.70, 8.44, 13.40, 14.75, and 16.88 nm, respectively. Three levels of depth of cut, machining speed, and tool rake angle are also considered. The results show that in nano-scale polycrystalline machining the cutting forces increase with depth of cut and machining speed, and with the use of negative tool rake angles. The distributions of equivalent stress are consistent with the cutting force trends. Moreover, it is found that in the grain size range of 5.32 to 14.75 nm the cutting forces and equivalent stress increase with increasing grain size for the nano-structured copper, while the trend reverses once the grain size grows beyond this range. This finding confirms the existence of both the regular Hall-Petch relation and the inverse Hall-Petch relation in polycrystalline machining, with a threshold grain size determining which of the two relations dominates. The dislocation-grain boundary interaction shows that the resistance of grain boundaries to dislocation movement is the fundamental mechanism behind the Hall-Petch relation, while grain boundary diffusion and movement account for the inverse Hall-Petch relation.
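
    The crossover described above is conventionally summarized by the empirical Hall-Petch relation and its inverse. The expression below is the standard textbook form (written in LaTeX), not an equation taken from this particular study:

        \sigma_y \;=\; \sigma_0 + k_y\, d^{-1/2} \qquad (d > d_c)

    where \sigma_y is the flow stress, d the grain size, \sigma_0 and k_y material constants, and d_c a threshold grain size. For d > d_c, strength rises as grains shrink (regular Hall-Petch); for d < d_c the trend inverts (inverse Hall-Petch), which matches the reversal reported above between grain sizes of 14.75 and 16.88 nm.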

  2. Meso-scale machining capabilities and issues

    Energy Technology Data Exchange (ETDEWEB)

    BENAVIDES,GILBERT L.; ADAMS,DAVID P.; YANG,PIN

    2000-05-15

    Meso-scale manufacturing processes are bridging the gap between silicon-based MEMS processes and conventional miniature machining. These processes can fabricate two- and three-dimensional parts with micron-sized features in traditional materials such as stainless steels, rare earth magnets, ceramics, and glass. Meso-scale processes that are currently available include focused ion beam sputtering, micro-milling, micro-turning, excimer laser ablation, femtosecond laser ablation, and micro electro discharge machining. These meso-scale processes employ subtractive machining technologies (i.e., material removal), unlike LIGA, which is an additive meso-scale process. Meso-scale processes have different material capabilities and machining performance specifications. Machining performance specifications of interest include minimum feature size, feature tolerance, feature location accuracy, surface finish, and material removal rate. Sandia National Laboratories is developing meso-scale electro-mechanical components, which require meso-scale parts that move relative to one another. The meso-scale parts fabricated by subtractive meso-scale manufacturing processes have unique tribology issues because of the variety of materials and the surface conditions produced by the different meso-scale manufacturing processes.

  3. Machine Learning at Scale

    OpenAIRE

    Izrailev, Sergei; Stanley, Jeremy M.

    2014-01-01

    It takes skill to build a meaningful predictive model even with the abundance of implementations of modern machine learning algorithms and readily available computing resources. Building a model becomes challenging if hundreds of terabytes of data need to be processed to produce the training data set. In a digital advertising technology setting, we are faced with the need to build thousands of such models that predict user behavior and power advertising campaigns in a 24/7 chaotic real-time p...

  4. Online uniform machine covering with the known largest size

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper investigates the semi-online scheduling problem on two uniform machines where the largest job size is known in advance. The objective is to maximize the minimum machine completion time. Both lower bounds and algorithms are given. The algorithms are optimal for most values of s ≥ 1, where s is the speed ratio of the two machines. The largest gap between the competitive ratio and the lower bound is about 0.064. Moreover, the overall competitive ratio 2 matches the overall lower bound.

  5. Potato Size and Shape Detection Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Liao Guiping

    2015-01-01

    Full Text Available To reduce error and speed up classification, potato shape and size are classified through machine vision, using a feature-extraction procedure to identify size and a shape-detection procedure to identify shape. Size-detection tests gave a length scale (calibration factor) of 40/191 = 0.210 mm/pixel, computed as 40/M, where 40 mm is the diameter of the table tennis ball used as a reference and M = 191 is its size in image pixels. Compared with manual measurements, the absolute error of the algorithm was <3 mm and the relative error was <4%, and measurements based on the ellipse axis lengths accurately recover the actual long and short axes of the potato. In shape detection, out of 228 images (114 positive-side and 114 negative-side views), only 2 were incorrectly classified, mainly because the extracted ratio R of those positive and negative images lies near 0.67 (0.671887, 0.661063, 0.667604, and 0.67193, respectively). A comparison of the calibration approaches, using both the basic-rectangle and the ellipse-axis R-ratio methods to detect potato size and shape, showed that the basic-rectangle method works better when the position is fixed, while the ellipse-axis method is more stable, with an error rate of 7%. It is therefore recommended that the ellipse-axis method be used to classify potato shape into round, long cylindrical, and oval, with an accuracy of 98.8%.
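
    As a concrete illustration of the calibration arithmetic described above, the following sketch (plain Python) converts pixel measurements to millimetres and applies a simple axis-ratio shape rule. The 40 mm reference ball and its 191-pixel image size come from the abstract; the example pixel values, the ratio definition and the 0.67 cutoff are illustrative assumptions, not the paper's exact procedure.

        REF_SIZE_MM = 40.0        # table tennis ball diameter used as reference
        REF_SIZE_PX = 191.0       # its measured size in image pixels
        MM_PER_PIXEL = REF_SIZE_MM / REF_SIZE_PX   # ~0.21 mm/pixel calibration factor

        def axes_mm(long_px: float, short_px: float) -> tuple[float, float]:
            """Convert fitted-ellipse axis lengths from pixels to millimetres."""
            return long_px * MM_PER_PIXEL, short_px * MM_PER_PIXEL

        def shape_class(long_px: float, short_px: float, cutoff: float = 0.67) -> str:
            """Crude shape rule based on the short/long axis ratio (illustrative only)."""
            return "round" if short_px / long_px >= cutoff else "long/oval"

        # Hypothetical axis lengths (in pixels) for one potato image.
        long_px, short_px = 412.0, 296.0
        print(f"calibration factor: {MM_PER_PIXEL:.3f} mm/pixel")
        print("axes: %.1f mm x %.1f mm" % axes_mm(long_px, short_px))
        print("shape:", shape_class(long_px, short_px))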

  6. Development of Harvesting Machines for Willow Small-Sizes Plantations in East-Central Europe

    OpenAIRE

    Trzepieciński, Tomasz; Stachowicz, Feliks; Niemiec, Witold; Kępa, Leszek; Dziurka, Marek

    2016-01-01

    The production of plant biomass on small farms in Central and Eastern European countries requires agricultural machines adjusted to the scale of production. This article presents new machines for small-sized plantations of energy crops, together with a finite-element strength analysis of the three-point-linkage mower frame. The advantage of the proposed solutions is their simple construction, which is connected with low ...

  7. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  8. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  9. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  10. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  11. Teraflop-scale Incremental Machine Learning

    CERN Document Server

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library, with a few omissions, as the reference machine. We introduce a Levin Search variant based on a stochastic context-free grammar, together with four synergistic update algorithms that use the same grammar as a guiding probability distribution over programs. The update algorithms include adjusting production probabilities, reusing previous solutions, learning programming idioms, and discovering frequent subprograms. Experiments with two training sequences demonstrate that our approach to incremental learning is effective.

  12. Visuomotor Dissociation in Cerebral Scaling of Size

    NARCIS (Netherlands)

    Potgieser, Adriaan R. E.; de Jong, Bauke M.

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in whi

  13. Visuomotor Dissociation in Cerebral Scaling of Size

    NARCIS (Netherlands)

    Potgieser, Adriaan R. E.; de Jong, Bauke M.

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in

  14. Size-scaling of tensile failure stress in boron carbide

    Energy Technology Data Exchange (ETDEWEB)

    Wereszczak, Andrew A [ORNL; Kirkland, Timothy Philip [ORNL; Strong, Kevin T [ORNL; Jadaan, Osama M. [University of Wisconsin, Platteville; Thompson, G. A. [U.S. Army Dental and Trauma Research Detachment, Great Lakes

    2010-01-01

    Weibull strength-size scaling in a rotary-ground, hot-pressed boron carbide is described using strength test coupons that sampled effective areas from the very small (~ 0.001 mm²) to the very large (~ 40,000 mm²). Equibiaxial flexure and Hertzian testing were used for the strength testing. Characteristic strengths for several different specimen geometries are analyzed as a function of effective area. The characteristic strength was found to increase substantially with decreasing effective area and exhibited a bilinear relationship. Machining damage limited the strength measured with equibiaxial flexure testing for effective areas greater than ~ 1 mm², while microstructural-scale flaws limited the strength for effective areas less than 0.1 mm² in the Hertzian testing. The selection of a ceramic strength to account for ballistically induced tile deflection and for expanding-cavity modeling is considered in the context of the measured strength-size scaling.
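
    The strength-size relationship referred to here is conventionally expressed through Weibull effective-area scaling. The relation below is the standard form (in LaTeX), not an equation quoted from the report:

        \frac{\sigma_{\theta,1}}{\sigma_{\theta,2}} \;=\; \left(\frac{A_{e,2}}{A_{e,1}}\right)^{1/m}

    where \sigma_{\theta,i} are characteristic strengths, A_{e,i} the corresponding effective areas, and m the Weibull modulus. A smaller effective area therefore yields a higher characteristic strength, consistent with the bilinear trend described above when different flaw populations (machining damage versus microstructural-scale flaws) dominate in different size ranges.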

  15. Visuomotor Dissociation in Cerebral Scaling of Size.

    Science.gov (United States)

    Potgieser, Adriaan R E; de Jong, Bauke M

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while maintaining a constant size of drawing (visual incongruity) or the size of the examples remained constant while subjects were instructed to make changes in size (motor incongruity). These incongruent conditions were compared to congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented simultaneous use of a 'resized' virtual template and actual picture information requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest in motor incongruity, while right pre-dorsal premotor activation specifically occurred in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred in all variably sized drawing conditions, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.

  16. Visuomotor Dissociation in Cerebral Scaling of Size.

    Directory of Open Access Journals (Sweden)

    Adriaan R E Potgieser

    Full Text Available Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while maintaining a constant size of drawing (visual incongruity) or the size of the examples remained constant while subjects were instructed to make changes in size (motor incongruity). These incongruent conditions were compared to congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented simultaneous use of a 'resized' virtual template and actual picture information requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest in motor incongruity, while right pre-dorsal premotor activation specifically occurred in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred in all variably sized drawing conditions, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.

  17. Less is more: regularization perspectives on large scale machine learning

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution, at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.

  18. Gott Time Machines, BTZ Black Hole Formation, and Choptuik Scaling

    CERN Document Server

    Birmingham, Danny; Sen, Siddhartha

    2000-01-01

    We study the formation of BTZ black holes by the collision of point particles. It is shown that the Gott time machine, originally constructed for the case of vanishing cosmological constant, provides a precise mechanism for black hole formation. As a result, one obtains an exact analytic understanding of the Choptuik scaling.

  19. Gott time machines, BTZ black hole formation, and Choptuik scaling

    Science.gov (United States)

    Birmingham; Sen

    2000-02-07

    We study the formation of Banados-Teitelboim-Zanelli black holes by the collision of point particles. It is shown that the Gott time machine, originally constructed for the case of vanishing cosmological constant, provides a precise mechanism for black hole formation. As a result, one obtains an exact analytic understanding of the Choptuik scaling.

  20. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
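
    The "prototype vectors" idea is closely related to Nyström-style low-rank kernel approximation. The sketch below (NumPy only, with a random subset standing in for the prototypes) shows that generic construction; it is not the PVM algorithm itself, whose prototype selection and semi-supervised objective are more involved.

        import numpy as np

        def rbf(X, Y, gamma=0.5):
            """Gaussian (RBF) kernel matrix between the rows of X and Y."""
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 10))              # all samples (labeled + unlabeled)
        prototypes = X[rng.choice(len(X), size=50, replace=False)]

        K_np = rbf(X, prototypes)                    # n x m cross-kernel
        K_pp = rbf(prototypes, prototypes)           # m x m prototype kernel
        # Nystrom low-rank approximation of the full n x n kernel matrix.
        K_approx = K_np @ np.linalg.pinv(K_pp) @ K_np.T

        K_full = rbf(X, X)
        err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
        print(f"relative approximation error: {err:.3f}")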

  1. Series Design of Large-Scale NC Machine Tool

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi

    2007-01-01

    Product system design is a mature concept in western developed countries and has been applied in the defense industry since the last century. Up until now, however, functional combination has remained the main method for product system design in China. Therefore, in terms of product generation and product interaction we remain in a weak position compared with the requirements of global markets. Today, the idea of serial product design has attracted much attention in the design field, and the definition of a product generation and its parameters has become standard in serial product design. Although the design of a large-scale NC machine tool is complicated, it can be further optimized through precise object design by placing the concept of platform establishment firmly within serial product design. The essence of serial product design is demonstrated through the design process of a large-scale NC machine tool.

  2. Building a Large-Scale Knowledge Base for Machine Translation

    CERN Document Server

    Knight, K; Knight, Kevin; Luk, Steve K.

    1994-01-01

    Knowledge-based machine translation (KBMT) systems have achieved excellent results in constrained domains, but have not yet scaled up to newspaper text. The reason is that knowledge resources (lexicons, grammar rules, world models) must be painstakingly handcrafted from scratch. One of the hypotheses being tested in the PANGLOSS machine translation project is whether or not these resources can be semi-automatically acquired on a very large scale. This paper focuses on the construction of a large ontology (or knowledge base, or world model) for supporting KBMT. It contains representations for some 70,000 commonly encountered objects, processes, qualities, and relations. The ontology was constructed by merging various online dictionaries, semantic networks, and bilingual resources, through semi-automatic methods. Some of these methods (e.g., conceptual matching of semantic taxonomies) are broadly applicable to problems of importing/exporting knowledge from one KB to another. Other methods (e.g., bilingual match...

  3. Method for producing fabrication material for constructing micrometer-scaled machines, fabrication material for micrometer-scaled machines

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, F.J.

    1995-12-31

    A method for producing fabrication material for use in the construction of nanometer-scaled machines is provided whereby similar protein molecules are isolated and manipulated at predetermined residue positions so as to facilitate noncovalent interaction, but without compromising the folding configuration or native structure of the original protein biomodules. A fabrication material is also provided consisting of biomodules systematically constructed and arranged at specific solution parameters.

  4. Finite-size scaling at quantum transitions

    Science.gov (United States)

    Campostrini, Massimo; Pelissetto, Andrea; Vicari, Ettore

    2014-03-01

    We develop the finite-size scaling (FSS) theory at quantum transitions. We consider various boundary conditions, such as open and periodic boundary conditions, and characterize the corrections to the leading FSS behavior. Using renormalization-group (RG) theory, we generalize the classical scaling ansatz to describe FSS in the quantum case, classifying the different sources of scaling corrections. We identify nonanalytic corrections due to irrelevant (bulk and boundary) RG perturbations and analytic contributions due to regular backgrounds and analytic expansions of the nonlinear scaling fields. To check the general predictions, we consider the quantum XY chain in a transverse field. For this model exact or numerically accurate results can be obtained by exploiting its fermionic quadratic representation. We study the FSS of several observables, such as the free energy, the energy differences between low-energy levels, correlation functions of the order parameter, etc., confirming the general predictions in all cases. Moreover, we consider bipartite entanglement entropies, which are characterized by the presence of additional scaling corrections, as predicted by conformal field theory.
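
    For orientation, the leading finite-size scaling ansatz alluded to above can be written in its generic textbook form (standard FSS/RG notation, not this paper's specific derivation). If an observable behaves as O \sim |g - g_c|^{\kappa} in the thermodynamic limit, then on a finite system of size L near the quantum critical point g_c

        O(L, g) \;\approx\; L^{-\kappa/\nu}\, f\!\left((g - g_c)\, L^{1/\nu}\right), \qquad \Delta(L, g_c) \;\sim\; L^{-z},

    where \nu is the correlation-length exponent, z the dynamic exponent, \Delta the lowest energy gap, and f a universal scaling function. Scaling corrections enter as additional negative powers of L; classifying those corrections is the focus of the work summarized above.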

  5. New Balancing Equipment for Mass Production of Small and Medium-Sized Electrical Machines

    DEFF Research Database (Denmark)

    Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika

    2010-01-01

    The level of vibration and noise is an important feature. It is good practice to explain the significance of the indicators of the quality of electrical machines. The mass production of small and medium-sized electrical machines demands speed (short typical measurement time), reliability...

  6. Protein domain boundary prediction by combining support vector machine and domain guess by size algorithm

    Institute of Scientific and Technical Information of China (English)

    Dong Qiwen; Wang Xiaolong; Lin Lei

    2007-01-01

    Successful prediction of protein domain boundaries provides valuable information not only for the computational structure prediction of multi-domain proteins but also for experimental structure determination. A novel method for domain boundary prediction is presented, which combines a support vector machine with the domain guess by size algorithm. Since the evolutionary information of multiple domains can be detected in the position-specific scoring matrix, the support vector machine is trained and tested on the values of position-specific scoring matrices generated by PSI-BLAST. Candidate domain boundaries are selected from the output of the support vector machine and then passed to the domain guess by size algorithm to give the final domain boundary predictions. The experimental results show that the combined method outperforms both the support vector machine and domain guess by size used individually.

  7. Fabrication of large scale nanostructures based on a modified atomic force microscope nanomechanical machining system.

    Science.gov (United States)

    Hu, Z J; Yan, Y D; Zhao, X S; Gao, D W; Wei, Y Y; Wang, J H

    2011-12-01

    The atomic force microscope (AFM) tip-based nanomechanical machining has been demonstrated to be a powerful tool for fabricating complex 2D/3D nanostructures. But the machining scale is very small, which severely holds back this technique. How to enlarge the machining scale is therefore a major concern for researchers. In the present study, a modified AFM tip-based nanomechanical machining system is established by combining a high-precision X-Y stage, with a travel range of 100 mm × 100 mm, with a commercial AFM in order to enlarge the machining scale. It is found that the tracing property of the AFM system is adequate for large-scale machining when a constant normal load is maintained. The effects of the machining parameters, including the machining direction and the tip geometry, on the uniformity of the machined depth at large scale are evaluated. Consequently, a new tip trace and an increasing-load scheme are presented to achieve a uniform machined depth. Finally, a polymer nanoline array with dimensions of 1 mm × 0.7 mm, a line density of 1000 lines/mm and an average machined depth of 150 nm, and a 20 × 20 polymer square-hole array with a scale of 380 μm × 380 μm and an average machined depth of 250 nm are machined successfully. The uniformity of the machined depths for all the nanostructures is acceptable. It is therefore verified that the AFM tip-based nanomechanical machining method can be used to machine millimeter-scale nanostructures.

  8. MagLIF scaling on Z and future machines

    Science.gov (United States)

    Slutz, Stephen; Stygar, William; Gomez, Matthew; Campbell, Edward; Peterson, Kyle; Sefkow, Adam; Sinars, Daniel; Vesey, Roger

    2015-11-01

    The MagLIF (Magnetized Liner Inertial Fusion) concept [S.A. Slutz et al Phys. Plasmas 17, 056303, 2010] has demonstrated [M.R. Gomez et al., PRL 113, 155003, 2014] fusion-relevant plasma conditions on the Z machine. We present 2D numerical simulations of the scaling of MagLIF on Z indicating that deuterium/tritium (DT) fusion yields greater than 100 kJ could be possible on Z when operated at a peak current of 25 MA. Much higher yields are predicted for MagLIF driven with larger peak currents. Two high performance pulsed-power machines (Z300 and Z800) have been designed based on Linear Transformer Driver (LTD) technology. The Z300 design would provide approximately 48 MA to a MagLIF load, while Z800 would provide about 66 MA. We used a parameterized Thevenin equivalent circuit to drive a series of 1D and 2D numerical simulations with currents between and beyond these two designs. Our simulations indicate that 5-10 MJ yields may be possible with Z300, while yields of about 1 GJ may be possible with Z800. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. Zooniverse - Web scale citizen science with people and machines. (Invited)

    Science.gov (United States)

    Smith, A.; Lynn, S.; Lintott, C.; Simpson, R.

    2013-12-01

    The Zooniverse (zooniverse.org) began in 2007 with the launch of Galaxy Zoo, a project in which more than 175,000 people provided shape analyses of more than 1 million galaxy images sourced from the Sloan Digital Sky Survey. These galaxy 'classifications', some 60 million in total, have since been used to produce more than 50 peer-reviewed publications based not only on the original research goals of the project but also on serendipitous discoveries made by the volunteer community. Based upon the success of Galaxy Zoo the team has gone on to develop more than 25 web-based citizen science projects, all with a strong research focus in a range of subjects from astronomy to zoology where human-based analysis still exceeds that of machine intelligence. Over the past 6 years Zooniverse projects have collected more than 300 million data analyses from over 1 million volunteers, providing fantastically rich datasets not only for the individuals working to produce research from their projects but also for the machine learning and computer vision research communities. The Zooniverse platform has always been developed to be the 'simplest thing that works', implementing only the most rudimentary algorithms for functionality such as task allocation and user-performance metrics - simplifications necessary to scale the Zooniverse such that the core team of developers and data scientists can remain small and the cost of running the computing infrastructure relatively modest. To date these simplifications have been appropriate for the data volumes and analysis tasks being addressed. This situation however is changing: next generation telescopes such as the Large Synoptic Survey Telescope (LSST) will produce data volumes dwarfing those previously analyzed. If citizen science is to have a part to play in analyzing these next-generation datasets then the Zooniverse will need to evolve into a smarter system capable for example of modeling the abilities of users and the complexities of

  10. Preemptive Semi-online Algorithms for Parallel Machine Scheduling with Known Total Size

    Institute of Scientific and Technical Information of China (English)

    Yong HE; Hao ZHOU; Yi Wei JIANG

    2006-01-01

    This paper investigates preemptive semi-online scheduling problems on m identical parallel machines, where the total size of all jobs is known in advance. The goal is to minimize the maximum machine completion time or to maximize the minimum machine completion time. For the first objective, we present an optimal semi-online algorithm with competitive ratio 1. For the second objective, we show that the competitive ratio of any semi-online algorithm is at least (2m-3)/(m-1) for any m > 2 and present optimal semi-online algorithms for m = 2, 3.

  11. APPROXIMATION SCHEMES FOR SCHEDULING A BATCHING MACHINE WITH NONIDENTICAL JOB SIZE

    Institute of Scientific and Technical Information of China (English)

    Xianzhao ZHANG; Zengxia CAI; Yuzhong ZHANG; Zhigang CAO

    2007-01-01

    In this paper we study the problem of scheduling a batching machine with nonidentical job sizes. The jobs arrive simultaneously and have unit processing time. The goal is to minimize the total completion times. Having shown that the problem is NP-hard, we put forward three approximation schemes with worst case ratio 4, 2, and 3/2, respectively.

  12. Online tomato sorting based on shape, maturity, size, and surface defects using machine vision

    OpenAIRE

    ARJENAKI, Omid Omidi; MOGHADDAM, Parviz Ahmadi; MOTLAGH, Asad Moddares

    2013-01-01

    Online sorting of tomatoes according to their features is an important postharvest procedure. The purpose of this research was to develop an efficient machine vision-based experimental sorting system for tomatoes. Relevant sorting parameters included shape (oblong and circular), size (small and large), maturity (color), and defects. The variables defining shape, maturity, and size of the tomatoes were eccentricity, average of color components, and 2-D pixel area, respectively. Tomato defects ...

  13. Scaling the drop size in coflow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Castro-Hernandez, E; Gordillo, J M [Area de Mecanica de Fluidos, Universidad de Sevilla, Avenida de los Descubrimientos s/n, 41092 Sevilla (Spain); Gundabala, V; Fernandez-Nieves, A [School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States)], E-mail: jgordill@us.es

    2009-07-15

    We perform extensive experiments with coflowing liquids in microfluidic devices and provide a closed expression for the drop size as a function of measurable parameters in the jetting regime that accounts for the experimental observations; this expression works irrespective of how the jets are produced, providing a powerful design tool for this type of experiments.

  14. Scaling of Seismic Memory with Earthquake Size

    CERN Document Server

    Zheng, Zeyu; Tenenbaum, Joel; Podobnik, Boris; Stanley, H Eugene

    2011-01-01

    It has been observed that earthquake events possess short-term memory, i.e. that events occurring in a particular location are dependent on the short history of that location. We conduct an analysis to see whether real-time earthquake data also possess long-term memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic waveform database recorded by 64 stations in Japan, including the 2011 "Great East Japan Earthquake", one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the waveform sign series show long-range power-law anticorrelations while the interval series show long-range power-law correlations. We find size-dependence in earthquake auto-correlations: as earthquake size increases, both of these correlation beha...

  15. Scaling Datalog for Machine Learning on Big Data

    CERN Document Server

    Bu, Yingyi; Carey, Michael J; Rosen, Joshua; Polyzotis, Neoklis; Condie, Tyson; Weimer, Markus; Ramakrishnan, Raghu

    2012-01-01

    In this paper, we present the case for a declarative foundation for data-intensive machine learning systems. Instead of creating a new system for each specific flavor of machine learning task, or hardcoding new optimizations, we argue for the use of recursive queries to program a variety of machine learning systems. By taking this approach, database query optimization techniques can be utilized to identify effective execution plans, and the resulting runtime plans can be executed on a single unified data-parallel query processing engine. As a proof of concept, we consider two programming models, Pregel and Iterative Map-Reduce-Update, from the machine learning domain, and show how they can be captured in Datalog, tuned for a specific task, and then compiled into an optimized physical plan. Experiments performed on a large computing cluster with real data demonstrate that this declarative approach can provide very good performance while offering both increased generality and programming ease.

  16. Development of meso-scale milling machine tool and its performance analysis

    Institute of Scientific and Technical Information of China (English)

    Hongtao LI; Xinmin LAI; Chengfeng LI; Zhongqin LIN; Jiancheng MIAO; Jun NI

    2008-01-01

    To overcome the shortcomings of current technologies for meso-scale manufacturing, such as MEMS and ultra-precision machining, this paper focuses on investigation of the meso-milling process with a miniaturized machine tool. First, the technologies relevant to studies of the process mechanism are examined, based on an analysis of the characteristics of the meso-milling process. An overview of the key issues is presented and research approaches are proposed. Then, a meso-scale milling machine tool system is developed. The subsystems and their specifications are described in detail. Finally, tests are conducted to evaluate the performance of the system. These tests consist of precision measurement of the positioning subsystem, a test for machining precision evaluation, and experiments on machining mechanical parts with complex features. Through the test analysis, the meso-milling process with a miniaturized machine tool is shown to be feasible and applicable for meso-scale manufacturing.

  17. Machine translation of TV subtitles for large scale production

    OpenAIRE

    Volk, Martin; Sennrich, Rico; Hardmeier, Christian; Tidström, Frida

    2010-01-01

    This paper describes our work on building and employing Statistical Machine Translation systems for TV subtitles in Scandinavia. We have built translation systems for Danish, English, Norwegian and Swedish. They are used in daily subtitle production and translate large volumes. As an example we report on our evaluation results for three TV genres. We discuss our lessons learned in the system development process which shed interesting light on the practical use of Machine Translation technology.

  18. Optimal Preemptive Online Algorithms for Scheduling with Known Largest Size on Two Uniform Machines

    Institute of Scientific and Technical Information of China (English)

    Yong HE; Yi Wei JIANG; Hao ZHOU

    2007-01-01

    In this paper, we consider the semi-online preemptive scheduling problem with known largest job size on two uniform machines. Our goal is to maximize the continuous period of time (starting from time zero) during which both machines are busy, which is equivalent to maximizing the minimum machine completion time if idle time is not introduced. We design optimal deterministic semi-online algorithms for every machine speed ratio s ∈ [1, ∞), and show that idle time is required to achieve optimality during the assignment procedure of the algorithm for any s (s² + 3s + 1)/(s² + 2s + 1). The competitive ratio of the algorithms is (s² + 3s + 1)/(s² + 2s + 1), which matches the randomized lower bound for every s ≥ 1. Hence randomization does not help for the discussed preemptive scheduling problem.
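
    To make the quoted bound concrete, the snippet below (plain Python, simply evaluating the ratio stated above) tabulates (s² + 3s + 1)/(s² + 2s + 1) for a few speed ratios; it equals 5/4 at s = 1 and approaches 1 as s grows.

        def competitive_ratio(s: float) -> float:
            """Competitive ratio quoted above for machine speed ratio s >= 1."""
            return (s * s + 3 * s + 1) / (s * s + 2 * s + 1)

        for s in (1.0, 1.5, 2.0, 5.0, 10.0):
            print(f"s = {s:4.1f}  ratio = {competitive_ratio(s):.4f}")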

  19. ScaleMT: a free/open-source framework for building scalable machine translation web services

    OpenAIRE

    Sánchez-Cartagena, Víctor M.; Pérez-Ortiz, Juan Antonio

    2009-01-01

    Machine translation web services usage is growing rapidly, mainly because of the translation quality and reliability of the service provided by the Google Ajax Language API. To allow open-source machine translation projects to compete with Google's and gain visibility on the internet, we have developed ScaleMT: a free/open-source framework that exposes existing machine translation engines as public web services. This framework is highly scalable as it can run coordinately on many serv...

  20. DEVELOPMENT OF SMALL INJECTION MOULDING MACHINE FOR FORMING SMALL PLASTIC ARTICLES FOR SMALL-SCALE INDUSTRIES

    OpenAIRE

    OYETUNJI, A.

    2010-01-01

    The development of a small injection moulding machine for forming small plastic articles in small-scale industries was studied. The work entailed the design, construction and testing of a small injection moulding machine capable of forming small plastic articles by injecting molten resin into a closed, cooled mould, where it solidifies to give the desired product. The machine was designed and constructed to work as a prototype for producing very small plastic components. Design ...

  1. TensorFlow: A system for large-scale machine learning

    OpenAIRE

    2016-01-01

    TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexib...

  2. Scale effects and a method for similarity evaluation in micro electrical discharge machining

    Science.gov (United States)

    Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua

    2016-08-01

    Electrical discharge machining (EDM) is a promising non-traditional micro-machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performance of micro EDM. A new concept of "scale effects" in micro EDM is proposed; the scale effects reveal the difference in machining performance between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single-factor experiments are conducted and the experimental results are analyzed by discussing the similarity difference and similarity precision. The results show that the output measures of the scale effects in micro EDM do not change linearly with the discharge parameters. The values of similarity precision of machining time increase significantly when scaling down the capacitance or the open-circuit voltage. This indicates that the lower the scale of the discharge parameter, the greater the deviation of the non-geometrical similarity degree from the geometrical similarity degree, which means that a micro EDM system with lower discharge energy experiences stronger scale effects. The largest similarity difference is 5.34, while the largest similarity precision can be as high as 114.03. It is suggested that the similarity precision is more effective than the similarity difference in reflecting the scale effects and their fluctuation. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. This research offers engineering value for optimizing the machining parameters and improving the machining performance of micro EDM.

  3. Finite data-size scaling of clustering in earthquake networks

    CERN Document Server

    Abe, Sumiyoshi; Suzuki, Norikazu

    2010-01-01

    The earthquake network introduced in earlier work [S. Abe and N. Suzuki, Europhys. Lett. 65, 581 (2004)] is known to be of the small-world type. The values of the network characteristics, however, depend not only on the cell size (i.e., the scale of coarse graining needed for constructing the network) but also on the size of the seismic data set. Here, the discovery of a scaling law for the clustering coefficient in terms of the data size, referred to here as finite data-size scaling, is reported. Its universality is supported by detailed analysis of data taken from California, Japan, and Iran.

  4. Large-scale Machine Learning in High-dimensional Datasets

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen

    Over the last few decades computers have gotten to play an essential role in our daily life, and data is now being collected in various domains at a faster pace than ever before. This dissertation presents research advances in four machine learning fields that all relate to the challenges imposed...... are better at modeling local heterogeneities. In the field of machine learning for neuroimaging, we introduce learning protocols for real-time functional Magnetic Resonance Imaging (fMRI) that allow for dynamic intervention in the human decision process. Specifically, the model exploits the structure of f...

  5. High-precision micro/nano-scale machining system

    Science.gov (United States)

    Kapoor, Shiv G.; Bourne, Keith Allen; DeVor, Richard E.

    2014-08-19

    A high precision micro/nanoscale machining system. A multi-axis movement machine provides relative movement along multiple axes between a workpiece and a tool holder. A cutting tool is disposed on a flexible cantilever held by the tool holder, the tool holder being movable to provide at least two of the axes to set the angle and distance of the cutting tool relative to the workpiece. A feedback control system uses measurement of deflection of the cantilever during cutting to maintain a desired cantilever deflection and hence a desired load on the cutting tool.

  6. Size structuring and allometric scaling relationships in coral reef fishes.

    Science.gov (United States)

    Dunic, Jillian C; Baum, Julia K

    2017-05-01

    Temperate marine fish communities are often size-structured, with predators consuming increasingly larger prey and feeding at higher trophic levels as they grow. Gape limitation and ontogenetic diet shifts are key mechanisms by which size structuring arises in these communities. Little is known, however, about size structuring in coral reef fishes. Here, we aimed to advance understanding of size structuring in coral reef food webs by examining the evidence for these mechanisms in two groups of reef predators. Given the diversity of feeding modes amongst coral reef fishes, we also compared gape size-body size allometric relationships across functional groups to determine whether they are reliable indicators of size structuring. We used gut content analysis and quantile regressions of predator size-prey size relationships to test for evidence of gape limitation and ontogenetic niche shifts in reef piscivores (n = 13 species) and benthic invertivores (n = 3 species). We then estimated gape size-body size allometric scaling coefficients for 21 different species from four functional groups, including herbivores/detritivores, which are not expected to be gape-limited. We found evidence of both mechanisms for size structuring in coral reef piscivores, with maximum prey size scaling positively with predator body size, and ontogenetic diet shifts including prey type and expansion of prey size. There was, however, little evidence of size structuring in benthic invertivores. Across species and functional groups, absolute and relative gape sizes were largest in piscivores as expected, but gape size-body size scaling relationships were not indicative of size structuring. Instead, relative gape sizes and mouth morphologies may be better indicators. Our results provide evidence that coral reef piscivores are size-structured and that gape limitation and ontogenetic niche shifts are the mechanisms from which this structure arises. Although gape allometry was not indicative of
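
    Gape size-body size allometry of the kind estimated above is usually modeled as a power law fitted on log-transformed data. The form below is the generic allometric model (in LaTeX), with symbols chosen for illustration rather than taken from the paper:

        G \;=\; a\, M^{b} \quad\Longleftrightarrow\quad \log G \;=\; \log a + b \log M

    where G is gape size, M body size (e.g., length), a a normalization constant, and b the scaling coefficient compared across species and functional groups; b = 1 corresponds to isometric growth of the gape, while b \neq 1 indicates positive or negative allometry.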

  7. Machining, Assembly, and Characterization of a Meso-Scale Double Shell Target

    Energy Technology Data Exchange (ETDEWEB)

    Bono, M J; Hibbard, R L

    2003-10-21

    Several issues related to the manufacture of precision meso-scale assemblies have been identified as part of an effort to fabricate an assembly consisting of machined polymer hemispherical shells and machined aerogel. The assembly, a double shell laser target, is composed of concentric spherical layers that were machined on a lathe and then assembled. This production effort revealed several meso-scale manufacturing techniques that worked well, such as the machining of aerogel with cutting tools to form low density structures, and the development of an assembly manipulator that allows control of the assembly forces to within a few milliNewtons. Limitations on the use of vacuum chucks for meso-scale components were also identified. Many of the lessons learned in this effort are not specific to double shell targets and may be relevant to the production of other meso-scale devices.

  8. Scale factor characteristics of laser gyroscopes of different sizes

    Science.gov (United States)

    Fan, Zhenfang; Lu, Guangfeng; Hu, Shomin; Wang, Zhiguo; Luo, Hui

    2016-04-01

    The scale factor correction characteristics of two ring laser gyroscopes of different sizes are investigated systematically in this paper. The variation in the scale factor can reach 144 or 70 ppm for square gyroscopes with arm lengths of 8.4 cm or 15.6 cm, respectively, during frequency tuning. A dip in the scale factor is observed at the line center of the gain characteristic for both gyroscope sizes. When a different longitudinal mode is excited, the scale factor behavior remains the same, but the scale factor values differ slightly from those derived from geometric prediction. The scale factor tends to decrease with increasing discharge current, but the sensitivity of the scale factor to variations in the excitation decreases with increasing discharge current.

  9. Scaling Support Vector Machines On Modern HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    You, Yang; Fu, Haohuan; Song, Shuaiwen; Randles, Amanda; Kerbyson, Darren J.; Marquez, Andres; Yang, Guangwen; Hoisie, Adolfy

    2015-02-01

    We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86 based multicore and many-core architectures, such as the Intel Ivy Bridge CPUs and Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures and serve as general optimization methods for other machine learning tools.

  10. Size-Scaling of Tensile Failure Stress in a Hot-Pressed Silicon Carbide

    Energy Technology Data Exchange (ETDEWEB)

    Wereszczak, Andrew A [ORNL; Kirkland, Timothy Philip [ORNL; Strong, Kevin T [ORNL; Campbell, James [U.S. Army Research Laboratory, Adelphi, MD; LaSalvia, Jerry [U.S. Army Research Laboratory, Adelphi, MD; Miller, Herbert [U.S. Army Research Laboratory, Adelphi, MD

    2010-01-01

    Quasi-static Weibull strength-size scaling of hot-pressed silicon carbide is described. Two surface conditions (uniaxial ground, and uniaxial ground followed by grit blasting) were explored. Strength test coupons sampled effective areas from the very small (4 × 10⁻³ mm²) to the very large (4 × 10⁴ mm²). Equibiaxial flexure and Hertzian ring crack initiation were used for the strength tests, and characteristic strengths for several different specimen geometries were analyzed as a function of effective area. The characteristic strength was found to increase substantially with decreased effective area for both surface conditions. Weibull moduli of 9.4 and 11.7 represented the strength-size scaling well for the two ground conditions over an effective area range of 10⁻¹ to 4 × 10⁴ mm². Machining damage was observed to be the dominant flaw type over this range. However, for effective areas <10⁻¹ mm², the characteristic strength increased rapidly for both ground surface conditions as the effective area decreased, and one or more of the inherent assumptions behind classical Weibull strength-size scaling were violated in this range. The selection of a ceramic strength to account for ballistically induced tile deflection and for expanding-cavity modeling is considered in the context of the measured strength-size scaling. The observed size scaling is briefly discussed with reference to dynamic strength.

  11. A font and size-independent OCR system for printed Kannada documents using support vector machines

    Indian Academy of Sciences (India)

    T V Ashwin; P S Sastry

    2002-02-01

    This paper describes an OCR system for printed text documents in Kannada, a South Indian language. The input to the system would be the scanned image of a page of text and the output is a machine editable file compatible with most typesetting software. The system first extracts words from the document image and then segments the words into sub-character level pieces. The segmentation algorithm is motivated by the structure of the script. We propose a novel set of features for the recognition problem which are computationally simple to extract. The final recognition is achieved by employing a number of 2-class classifiers based on the Support Vector Machine (SVM) method. The recognition is independent of the font and size of the printed text and the system is seen to deliver reasonable performance.

  12. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short length of the reads (Sanger sequencing yields on average 700 bp fragments) and the unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that this open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large-scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion Large-scale machine learning methods are well-suited for gene
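
    The two-stage architecture described above (discriminant-based features followed by a small neural network) can be sketched as follows. This is a minimal illustration assuming scikit-learn, with synthetic stand-in data; it does not reproduce the paper's codon-usage discriminants, translation-initiation-site features, or training corpus.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 64))   # stand-in for raw per-fragment sequence features
        y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=2000) > 0).astype(int)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Stage 1: a linear discriminant compresses the raw features into a score.
        lda = LinearDiscriminantAnalysis(n_components=1).fit(X_train, y_train)
        f_train, f_test = lda.transform(X_train), lda.transform(X_test)

        # Stage 2: a small neural network maps the score to a coding probability.
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
        net.fit(f_train, y_train)
        print("held-out accuracy:", net.score(f_test, y_test))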

  13. Size structure, not metabolic scaling rules, determines fisheries reference points

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Beyer, Jan

    2015-01-01

    that even though small species have a higher productivity than large species their resilience towards fishing is lower than expected from metabolic scaling rules. Further, we show that the fishing mortality leading to maximum yield per recruit is an ill-suited reference point. The theory can be used...... these empirical relations is lacking. Here, we combine life-history invariants, metabolic scaling and size-spectrum theory to develop a general size- and trait-based theory for demography and recruitment of exploited fish stocks. Important concepts are physiological or metabolic scaled mortalities and flux...... of individuals or their biomass to size. The theory is based on classic metabolic relations at the individual level and uses asymptotic size W∞ as a trait. The theory predicts fundamental similarities and differences between small and large species in vital rates and response to fishing. The central result...

  14. Fast and Accurate Support Vector Machines on Large Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry; Kerbyson, Darren J.; Hoisie, Adolfy

    2015-09-08

    Support Vector Machines (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm, which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary --- also known as hyperplane --- which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed memory algorithm to eliminate the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, which range from early (aggressive) to late (conservative) elimination of the samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively --- potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm--- de facto sequential SVM software --- which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets such as UCI HIGGS Boson dataset and Offending URL dataset.

  15. Determination of sample size in genome-scale RNAi screens.

    Science.gov (United States)

    Zhang, Xiaohua Douglas; Heyse, Joseph F

    2009-04-01

    For genome-scale RNAi research, it is critical to investigate the sample size required to achieve reasonably low false negative rates (FNR) and false positive rates. The analysis in this article reveals that the current design of sample size contributes to the occurrence of low signal-to-noise ratios in genome-scale RNAi projects. The analysis suggests that (i) an arrangement of 16 wells per plate is acceptable and an arrangement of 20-24 wells per plate is preferable for a negative control to be used for hit selection in a primary screen without replicates; (ii) in a confirmatory screen or a primary screen with replicates, a sample size of 3 is not large enough, and there is a large reduction in FNRs when the sample size increases from 3 to 4. To strike a tradeoff between benefit and cost, any sample size between 4 and 11 is a reasonable choice. If the main focus is the selection of siRNAs with strong effects, a sample size of 4 or 5 is a good choice. If we want to have enough power to detect siRNAs with moderate effects, the sample size needs to be 8, 9, 10 or 11. These discoveries about sample size bring insight to the design of a genome-scale RNAi screening experiment.
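
    The kind of FNR-versus-replicates reasoning described here can be illustrated with a simple simulation; the sketch below assumes a two-sample t-test against negative controls and an arbitrary effect size, which is not the exact error model used in the record.

```python
# Estimate the false negative rate by simulation for several replicate counts.
import numpy as np
from scipy import stats

def fnr(n_replicates, effect_size, alpha=0.05, n_controls=16, sims=5000, seed=1):
    """Fraction of true hits missed at significance level alpha (simulated)."""
    rng = np.random.default_rng(seed)
    misses = 0
    for _ in range(sims):
        hit = rng.normal(effect_size, 1.0, n_replicates)   # siRNA with a real effect
        ctrl = rng.normal(0.0, 1.0, n_controls)            # negative-control wells
        _, p = stats.ttest_ind(hit, ctrl, equal_var=False)
        misses += p >= alpha
    return misses / sims

for n in (3, 4, 5, 8, 11):
    print(n, "replicates, moderate effect:", round(fnr(n, effect_size=1.5), 3))
```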

  16. Machine Learning for Big Data: A Study to Understand Limits at Scale

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Del-Castillo-Negrete, Carlos Emilio [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-21

    This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and to train on larger samples worth it? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when it is trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results. We conclude with the following observations: (i) accuracy of the learning algorithm increases with increasing sample size but saturates at a point, beyond which more samples do not contribute to better accuracy/learning, (ii) the richness of the feature space dictates performance - both accuracy and training time, (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times) but the improvements are not commensurate with the effort of feature computation and training, and (iv) accuracy of the learning algorithms
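
    A minimal version of the kind of empirical scaling study described here is a learning curve: accuracy and training time measured as the training set grows. The sketch below uses synthetic data and a simple model, so the saturation point is purely illustrative.

```python
# Learning-curve sketch: accuracy and training time versus sample size.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=60000, n_features=50, n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=10000, random_state=0)

for n in (500, 2000, 8000, 32000, 50000):
    t0 = time.time()
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
    print(f"n={n:6d}  accuracy={clf.score(X_te, y_te):.3f}  train_time={time.time()-t0:.2f}s")
```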

  17. Droplet size measurements for spray dryer scale-up.

    Science.gov (United States)

    Thybo, Pia; Hovgaard, Lars; Andersen, Sune Klint; Lindeløv, Jesper Saederup

    2008-01-01

    This study was dedicated to facilitating scale-up in spray drying from an atomization standpoint. The purpose was to investigate differences in operating conditions between a pilot and a production scale nozzle. The intention was to identify the operating ranges in which the two nozzles produced similar droplet size distributions. Furthermore, method optimization and validation were also covered. Externally mixing two-fluid nozzles of similar designs were used in this study. Both nozzles are typically used in commercially available spray dryers, and they have been characterized with respect to droplet size distributions as a function of liquid type, liquid flow rate, atomization gas flow rate, liquid orifice diameter, and atomization gas orifice diameter. All droplet size measurements were carried out using the Malvern Spraytec with nozzle operating conditions corresponding to typical settings for spray drying. This gave droplets with Sauter Mean Diameters less than 40 μm and typically 5-20 μm. A model previously proposed by Mansour and Chigier was used to correlate the droplet size to the operating parameters. It was possible to make a correlation for water incorporating the droplet sizes for both the pilot scale and the production scale nozzle. However, a single correlation was not able to account properly for the physical properties of the liquid to be atomized. Therefore, the droplet size distributions of ethanol could not be adequately predicted on the basis of the water data. This study has shown that it was possible to scale up from a pilot to a production scale nozzle in a systematic fashion. However, a prerequisite was that the nozzles were geometrically similar. When externally mixing two-fluid nozzles are used as atomizers, the results obtained from this study could be a useful guideline for selecting appropriate operating conditions when scaling up the spray-drying process.

  18. Paleowattmeters: A scaling relation for dynamically recrystallized grain size

    Science.gov (United States)

    Austin, Nicholas J.; Evans, Brian

    2007-04-01

    During dislocation creep, mineral grains often evolve to a stable size, dictated by the deformation conditions. We suggest that grain-size evolution during deformation is determined by the rate of mechanical work. Provided that other elements of microstructure have achieved steady state and that the dissipation rate is roughly constant, then changes in internal energy will be proportional to changes in grain-boundary area. If normal grain-growth and dynamic grain-size reduction occur simultaneously, then the steady-state grain size is determined by the balance of those rates. A scaling model using these assumptions and published grain-growth and mechanical relations matches stress grain-size relations for quartz and olivine rocks with no fitting. For marbles, the model also explains scatter not rationalized by assuming that recrystallized grain size is a function of stress alone. When extrapolated to conditions typical for natural mylonites, the model is consistent with field constraints on stresses and strain rates.

  19. Dynamic finite-size scaling at first-order transitions

    Science.gov (United States)

    Pelissetto, Andrea; Vicari, Ettore

    2017-07-01

    We investigate the dynamic behavior of finite-size systems close to a first-order transition (FOT). We develop a dynamic finite-size scaling (DFSS) theory for the dynamic behavior in the coexistence region where different phases coexist. This is characterized by an exponentially large time scale related to the tunneling between the two phases. We show that, when considering time scales of the order of the tunneling time, the dynamic behavior can be described by a two-state coarse-grained dynamics. This allows us to obtain exact predictions for the dynamical scaling functions. To test the general DFSS theory at FOTs, we consider the two-dimensional Ising model in the low-temperature phase, where the external magnetic field drives a FOT, and the 20-state Potts model, which undergoes a thermal FOT. Numerical results for a purely relaxational dynamics fully confirm the general theory.
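
    The tunneling picture discussed here can be seen in a toy simulation: below Tc at zero field a small 2D Ising lattice occasionally switches between the two coexisting magnetized phases. The sketch below is only a Metropolis illustration of that phenomenon, not the DFSS analysis itself; lattice size, temperature and sweep count are arbitrary and kept small so the loop runs in seconds.

```python
# Count magnetization sign switches (phase tunnelling events) of a small 2D Ising lattice.
import numpy as np

rng = np.random.default_rng(0)
L, T, sweeps = 6, 2.1, 10000            # T below Tc ~ 2.269; small lattice so flips are observable
s = rng.choice([-1, 1], size=(L, L))
mags = []
for _ in range(sweeps):
    for _ in range(L * L):              # one Metropolis sweep
        i, j = rng.integers(L), rng.integers(L)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    mags.append(s.mean())

mags = np.array(mags)
core = mags[np.abs(mags) > 0.5]         # times when the system is clearly inside one phase
flips = int(np.sum(np.diff(np.sign(core)) != 0))
print("magnetization sign switches (tunnelling events):", flips)
```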

  20. A Multilevel Design Method of Large-scale Machine System Oriented Network Environment

    Institute of Scientific and Technical Information of China (English)

    LI Shuiping; HE Jianjun

    2006-01-01

    The design of a large-scale machine system is a very complex problem. Such design problems usually have many design variables and constraints, so they are difficult to solve rapidly and efficiently using conventional methods. In this paper, a new multilevel design method oriented to a network environment is proposed, which maps the design problem of a large-scale machine system into a hypergraph with a degree of linking strength (DLS) between vertices. By decomposition of the hypergraph, this method can divide the complex design problem into small and simple subproblems that can be solved concurrently in a network.

  1. Scale invariance of incident size distributions in response to sizes of their causes.

    Science.gov (United States)

    Englehardt, James D

    2002-04-01

    Incidents can be defined as low-probability, high-consequence events and lesser events of the same type. Lack of data on extremely large incidents makes it difficult to determine distributions of incident size that reflect such disasters, even though they represent the great majority of total losses. If the form of the incident size distribution can be determined, then predictive Bayesian methods can be used to assess incident risks from limited available information. Moreover, incident size distributions have generally been observed to have scale invariant, or power law, distributions over broad ranges. Scale invariance in the distributions of sizes of outcomes of complex dynamical systems has been explained based on mechanistic models of natural and built systems, such as models of self-organized criticality. In this article, scale invariance is shown to result also as the maximum Shannon entropy distribution of incident sizes arising as the product of arbitrary functions of cause sizes. Entropy is shown by simulation and derivation to be maximized as a result of dependence, diversity, abundance, and entropy of multiplicative cause sizes. The result represents an information-theoretic explanation of invariance, parallel to those of mechanistic models. For example, distributions of incident size resulting from 30 partially dependent causes are shown to be scale invariant over several orders of magnitude. Empirical validation of power law distributions of incident size is reviewed, and the Pareto (power law) distribution is validated against oil spill, hurricane, and insurance data. The applicability of the Pareto distribution, in particular, for assessment of total losses over a planning period is discussed. Results justify the use of an analytical, predictive Bayesian version of the Pareto distribution, derived previously, to assess incident risk from available data.
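
    A basic version of the validation step mentioned here is fitting a Pareto (power-law) model to incident-size data; the sketch below uses synthetic sizes and an arbitrary tail exponent, not the oil spill, hurricane, or insurance data of the record.

```python
# Fit a Pareto tail by maximum likelihood and inspect the log-log survival curve.
import numpy as np
from scipy import stats

sizes = stats.pareto.rvs(b=1.2, scale=10.0, size=2000, random_state=0)  # synthetic incident sizes

b, loc, scale = stats.pareto.fit(sizes, floc=0)      # shape fitted with the location fixed at 0
print(f"fitted tail exponent b = {b:.2f}, scale = {scale:.1f}")

# Scale invariance appears as an approximately straight survival function in log-log space.
x = np.sort(sizes)
surv = 1.0 - np.arange(1, len(x) + 1) / len(x)
slope = np.polyfit(np.log(x[:-1]), np.log(surv[:-1]), 1)[0]
print(f"log-log survival slope ~ {slope:.2f} (close to -b for a power law)")
```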

  2. Conformal scaling and the size of m-hadrons

    Science.gov (United States)

    Del Debbio, Luigi; Zwicky, Roman

    2014-01-01

    The scaling laws in an IR theory are dictated by the critical exponents of relevant operators. We have investigated these scaling laws at leading order in two previous papers. In this work we investigate further consequences of the scaling laws, trying to identify potential signatures that could be studied by lattice simulations. From the first derivative of the form factor we derive the behavior of the mean charge radius of the hadronic states in the theory. We obtain ⟨r_H²⟩ ∼ m^(−2/(1+γ_m*)), which is consistent with ⟨r_H²⟩ ∼ 1/M_H². The mean charge radius can be used as an alternative observable to assess the size of the physical states, and hence finite size effects, in numerical simulations. Furthermore, we discuss the behavior of specific field correlators in coordinate space for the case of conformal, scale-invariant, and confining theories, making use of selection rules in scaling dimensions and spin. We compute the scaling corrections to correlation functions by linearizing the renormalization group equations. We find that these corrections are potentially large close to the edge of the conformal window. As an application we compute the scaling correction to the formula M_H ∼ m^(1/(1+γ_m*)) directly through its associated correlator as well as through the trace anomaly. The two computations are shown to be equivalent through a generalization of the Feynman-Hellmann theorem for the fermion mass and the gauge coupling.

  3. Finite size scaling in the planar Lebwohl-Lasher model

    Science.gov (United States)

    Mondal, Enakshi; Roy, Soumen Kumar

    2003-06-01

    The standard finite size scaling method for second order phase transition has been applied to Monte Carlo data obtained for a planar Lebwohl-Lasher lattice model using the Wolff cluster algorithm. We obtain Tc and the exponents γ, ν, and z and the results are different from those obtained by other investigators.

  4. Multi-Scale Analysis Based Ball Bearing Defect Diagnostics Using Mahalanobis Distance and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Chun-Chieh Wang

    2013-01-01

    The objective of this research is to investigate the feasibility of utilizing the multi-scale analysis and support vector machine (SVM) classification scheme to diagnose bearing faults in rotating machinery. For complicated signals, the characteristics of dynamic systems may not be apparently observed in a single scale, particularly for the fault-related features of rotating machinery. In this research, the multi-scale analysis is employed to extract the possible fault-related features in different scales, such as the multi-scale entropy (MSE), multi-scale permutation entropy (MPE), multi-scale root-mean-square (MSRMS) and multi-band spectrum entropy (MBSE). Some of the features are then selected as the inputs of the support vector machine (SVM) classifier through the Fisher score (FS) as well as the Mahalanobis distance (MD) evaluations. The vibration signals of the bearing test data at Case Western Reserve University (CWRU) are utilized as the illustrated examples. The analysis results demonstrate that an accurate bearing defect diagnosis can be achieved by using the extracted machine features in different scales. It can also be noted that the diagnostic results of bearing faults can be further enhanced through the feature selection procedures of FS and MD evaluations.
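
    A loose illustration of the multi-scale-features-plus-SVM idea is sketched below; it uses coarse-grained RMS and permutation entropy as the per-scale features and synthetic "healthy" versus "faulty" signals rather than the CWRU data, and it omits the FS/MD feature selection.

```python
# Multi-scale feature extraction (RMS + permutation entropy per scale) fed into an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def permutation_entropy(x, order=3):
    patterns = [tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order)]
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def features(x, scales=(1, 2, 4, 8)):
    feats = []
    for s in scales:
        cg = coarse_grain(x, s)
        feats += [np.sqrt(np.mean(cg ** 2)), permutation_entropy(cg)]
    return feats

rng = np.random.default_rng(0)
t = np.arange(1024) / 1024
signals, labels = [], []
for _ in range(60):
    healthy = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.standard_normal(t.size)
    faulty = healthy + 0.8 * (np.sin(2 * np.pi * 120 * t) > 0.95)   # impulsive fault-like bursts
    signals += [features(healthy), features(faulty)]
    labels += [0, 1]

print("CV accuracy:", cross_val_score(SVC(), np.array(signals), np.array(labels), cv=5).mean())
```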

  5. Micro to nano: Surface size scale and superhydrophobicity

    Directory of Open Access Journals (Sweden)

    Christian Dorrer

    2011-06-01

    This work looks at the fundamental question of how the surface mobility of drops in the composite state is related to the size scale of the roughness features of the surface. To this end, relevant literature is first reviewed and the important terms are clarified. We then describe and discuss contact and roll-off angle measurements on a set of hydrophobicized silicon post surfaces for which all parameters except for the surface size scale were held constant. It was found that a critical transition from “sticky superhydrophobic” (composite state with large contact angle hysteresis) to “truly superhydrophobic” (composite state with low hysteresis) takes place as the size of the surface features reaches 1 μm.

  6. Small-time scale network traffic prediction based on a local support vector machine regression model

    Institute of Scientific and Technical Information of China (English)

    Meng Qing-Fang; Chen Yue-Hui; Peng Yu-Hua

    2009-01-01

    In this paper we apply the nonlinear time series analysis method to small-time scale traffic measurement data. The prediction-based method is used to determine the embedding dimension of the traffic data. Based on the reconstructed phase space, the local support vector machine prediction method is used to predict the traffic measurement data, and the BIC-based neighbouring point selection method is used to choose the number of the nearest neighbouring points for the local support vector machine regression model. The experimental results show that the local support vector machine prediction method whose neighbouring points are optimized can effectively predict the small-time scale traffic measurement data and can reproduce the statistical features of real traffic measurements.
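
    The local-prediction idea described here (delay embedding, nearest neighbours, a small SVR per query point) can be sketched in a few lines; the series, embedding dimension and delay below are synthetic and arbitrary rather than BIC-selected traffic data.

```python
# Local SVR prediction on a reconstructed phase space.
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
t = np.arange(3000)
series = np.sin(0.05 * t) + 0.5 * np.sin(0.013 * t) + 0.05 * rng.standard_normal(t.size)

m, tau = 4, 3                                    # assumed embedding dimension and delay
idx = np.arange((m - 1) * tau, len(series) - 1)
X = np.column_stack([series[idx - k * tau] for k in range(m)])
y = series[idx + 1]

X_tr, y_tr, X_te, y_te = X[:-200], y[:-200], X[-200:], y[-200:]
nn = NearestNeighbors(n_neighbors=30).fit(X_tr)

preds = []
for x in X_te:                                   # one local model per query point
    _, nbr = nn.kneighbors(x.reshape(1, -1))
    local = SVR(C=10.0, gamma="scale").fit(X_tr[nbr[0]], y_tr[nbr[0]])
    preds.append(local.predict(x.reshape(1, -1))[0])

print("RMSE:", np.sqrt(np.mean((np.array(preds) - y_te) ** 2)))
```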

  7. Beliefs about Penis Size:Validation of a Scale for Men Ashamed about Their Penis Size

    OpenAIRE

    Veale, David; Eshkevari, Ertimiss; Read, Julie; Miles, Sarah; Troglia, Andrea; Phillips, Rachael; Echeverria, Lina Maria Carmona; Fiorito, Chiara; Wylie, Kevan; Muir, Gordon

    2014-01-01

    Introduction: No measures are available for understanding beliefs in men who experience shame about the perceived size of their penis. Such a measure might be helpful for treatment planning, and for measuring outcome after any psychological or physical intervention. Aim: Our aim was to validate a newly developed measure called the Beliefs about Penis Size Scale (BAPS). Method: One hundred seventy-three male participants completed a new questionnaire consisting of 18 items to be validated and developed in...

  8. Scaling relationships between sizes of nucleation regions and eventual sizes of microearthquakes

    Science.gov (United States)

    Hiramatsu, Yoshihiro; Furumoto, Muneyoshi

    2007-10-01

    We investigate the initial rupture process of microearthquakes to reveal relationships between nucleation region sizes and eventual earthquake sizes. In order to obtain high quality waveform data, we installed a trigger recording system with a sampling frequency of 10 kHz at the base of a deep borehole at the Nojima Fault, Japan. We analyze waveform data of 31 events around the borehole, with seismic moment ranging from 4.2 × 10⁹ Nm to 7.1 × 10¹¹ Nm. We use both a circular crack model with an accelerating rupture velocity (SK model) [Sato, T., Kanamori, H., 1999. Beginning of earthquakes modeled with the Griffith's fracture criterion, Bull. Seism. Soc. Am., 89, 80-93.], which generates a slow initial phase of the velocity pulse, and a circular crack model with a constant rupture velocity (SH model) [Sato, T., Hirasawa, T., 1973. Body wave spectra from propagating shear cracks, J. Phys. Earth, 21, 415-431.], which generates a ramp-like velocity pulse. Source parameters of these two models are estimated by waveform inversion of the first half cycle of the observed velocity pulse, applying both a grid search and a non-linear least squares method. 14 of 31 events are never reproduced by the SH model with a constant Q operator. But the SK model with a constant Q operator provides a size of the pre-existing crack, corresponding to the size of the nucleation region, and a size of the eventual crack. We recognize that (i) the eventual seismic moment is approximately scaled as the cube of the size of pre-existing cracks, (ii) the eventual seismic moment is scaled as the cube of the size of eventual cracks, and (iii) the size of eventual cracks is roughly proportional to the size of pre-existing cracks. We thus conclude that the size of eventual earthquakes is controlled by the size of the nucleation regions.

  9. DEVELOPMENT OF SMALL INJECTION MOULDING MACHINE FOR FORMING SMALL PLASTIC ARTICLES FOR SMALL-SCALE INDUSTRIES

    Directory of Open Access Journals (Sweden)

    OYETUNJI, A.

    2010-03-01

    The development of a small injection moulding machine for forming small plastic articles in small-scale industries was studied. The work entailed the design, construction and testing of a small injection moulding machine capable of forming small plastic articles by injecting molten resin into a closed, cooled mould, where it solidifies to give the desired product. The machine was designed and constructed to work as a prototype for producing very small plastic components. The design concept, operation, and assembly of the component parts are described. Working drawings and material selection were based on calculations of the injection plunger diameter, the number of teeth required for the plunger rack and spur gear, the angular velocity, number of revolutions, torque and power obtained from the selected electric motor, and the leverage on the handle of the machine. The machine parts were then assembled in line with the design; thereafter the constructed machine was tested using high-density polyethylene and masterbatch. The results obtained from the test were satisfactory.

  10. Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2011-01-01

    The large-scale rotary machine with multiple supports, such as the rotary kiln and the rope laying machine, is key equipment in the architectural, chemical, and agricultural industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis line deflection is a vital parameter for determining the mechanical state of a rotary machine, so body axial vibration needs to be studied for dynamic monitoring and adjustment of the machine. By using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements, namely, rigid disk, elastic shaft, and linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the response overall motion equation. Taking a rotary kiln as an instance, natural frequencies, modal shapes, and the response vibration for a given exciting axis line deflection are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in the common axis line measurement methods. The displacement response can be used for further measurement dynamical error analysis and compensation. The response overall motion equation could be applied to predict the body motion under abnormal mechanical conditions, and provide theoretical guidance for machine failure diagnosis.

  11. Costing Generated Runtime Execution Plans for Large-Scale Machine Learning Programs

    OpenAIRE

    Boehm, Matthias

    2015-01-01

    Declarative large-scale machine learning (ML) aims at the specification of ML algorithms in a high-level language and automatic generation of hybrid runtime execution plans ranging from single node, in-memory computations to distributed computations on MapReduce (MR) or similar frameworks like Spark. The compilation of large-scale ML programs exhibits many opportunities for automatic optimization. Advanced cost-based optimization techniques require---as a fundamental precondition---an accurat...

  12. Finite-size scaling a collection of reprints

    CERN Document Server

    1988-01-01

    Over the past few years, finite-size scaling has become an increasingly important tool in studies of critical systems. This is partly due to an increased understanding of finite-size effects by analytical means, and partly due to our ability to treat larger systems with large computers. The aim of this volume was to collect those papers which have been important for this progress and which illustrate novel applications of the method. The emphasis has been placed on relatively recent developments, including the use of the ε-expansion and of conformal methods.

  13. Finite-size scaling approach to dynamic storage allocation problem

    Science.gov (United States)

    Seyed-allaei, Hamed

    2003-09-01

    It is demonstrated how dynamic storage allocation algorithms can be analyzed in terms of finite-size scaling. The method is illustrated in the three simple cases of the first-fit, next-fit and best-fit algorithms, with the system working at full capacity. The analysis is done from two different points of view - running speed and employed memory. In both cases, and for all algorithms, it is shown that a simple scaling function exists and the relevant exponents are calculated. The method can be applied to similar problems as well.
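
    The three policies named here can be compared with a toy simulation; the memory size, request sizes and lifetimes below are made up, and the drop count is only a stand-in for the running-speed and memory observables analyzed in the record.

```python
# Toy allocator simulation: first-fit, next-fit and best-fit at full capacity.
import random

def merge(holes):
    """Coalesce adjacent free holes, given (start, length) pairs sorted by start."""
    out = []
    for s, l in holes:
        if out and out[-1][0] + out[-1][1] == s:
            out[-1] = (out[-1][0], out[-1][1] + l)
        else:
            out.append((s, l))
    return out

def run(policy, mem_size=2000, steps=5000, seed=0):
    rng = random.Random(seed)
    free = [(0, mem_size)]                       # sorted (start, length) holes
    live, next_ptr, dropped = [], 0, 0           # live = (expiry_time, start, length)
    for t in range(steps):
        expired = [b for b in live if b[0] <= t]
        live = [b for b in live if b[0] > t]
        free = merge(sorted(free + [(s, l) for _, s, l in expired]))
        size = rng.randint(8, 64)
        cands = [h for h in free if h[1] >= size]
        if not cands:
            dropped += 1                         # request fails: memory full/fragmented
            continue
        if policy == "first":
            hole = cands[0]
        elif policy == "best":
            hole = min(cands, key=lambda h: h[1])
        else:                                    # next-fit: first fitting hole at/after roving pointer
            after = [h for h in cands if h[0] >= next_ptr]
            hole = after[0] if after else cands[0]
        free.remove(hole)
        if hole[1] > size:
            free.append((hole[0] + size, hole[1] - size))
        free.sort()
        next_ptr = hole[0] + size
        live.append((t + rng.randint(20, 200), hole[0], size))
    return dropped

for p in ("first", "next", "best"):
    print(f"{p}-fit: dropped {run(p)} of 5000 requests")
```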

  14. Size scaling of negative hydrogen ion sources for fusion

    Science.gov (United States)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-04-01

    The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛ scale prototype source being in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges to meet the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the challenge in size scaling of a factor of eight. As an intermediate step a ½ scale ITER source went into operation at the IPP test facility ELISE with the first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER relevant size at ELISE, in which operational issues, physical aspects and the source performance is addressed, highlighting differences as well as similarities. The most ITER relevant results are: low pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and to reduce together with the bias applied between the differently shaped bias plate and the plasma grid the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a dedicated plasma drift in vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to the one in the prototype source despite the large size.

  15. Finite-Size Scaling in Random K-SAT Problems

    Science.gov (United States)

    Ha, Meesoon; Lee, Sang Hoon; Jeon, Chanil; Jeong, Hawoong

    2010-03-01

    We propose a comprehensive view of threshold behaviors in random K-satisfiability (K-SAT) problems, in the context of the finite-size scaling (FSS) concept of nonequilibrium absorbing phase transitions using the average SAT (ASAT) algorithm. In particular, we focus on the value of the FSS exponent to characterize the SAT/UNSAT phase transition, which is still debatable. We also discuss the role of the noise (temperature-like) parameter in stochastic local heuristic search algorithms.
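
    The ingredients mentioned here (random K-SAT instances at a given clause-to-variable ratio, a noisy stochastic local search) can be sketched in a few dozen lines; the search below is a simple WalkSAT-style rule used as a stand-in for ASAT, and the instance sizes are tiny so it runs quickly.

```python
# Random 3-SAT generation plus a noisy local search around the conjectured threshold.
import random

def random_ksat(n_vars, n_clauses, k=3, rng=None):
    rng = rng or random.Random(0)
    return [[rng.choice([v, -v]) for v in rng.sample(range(1, n_vars + 1), k)]
            for _ in range(n_clauses)]

def n_unsat(clauses, assign):
    return sum(not any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses)

def local_search(clauses, n_vars, noise=0.3, max_flips=2000, rng=None):
    rng = rng or random.Random(1)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any((l > 0) == assign[abs(l)] for l in c)]
        if not unsat:
            return True
        clause = rng.choice(unsat)
        if rng.random() < noise:                 # noisy move: random variable of the clause
            var = abs(rng.choice(clause))
        else:                                    # greedy move: flip minimizing unsatisfied clauses
            def cost(v):
                assign[v] = not assign[v]
                c = n_unsat(clauses, assign)
                assign[v] = not assign[v]
                return c
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return False

n = 40
for alpha in (3.8, 4.27, 4.6):                   # clause-to-variable ratios around the threshold
    print(f"alpha={alpha}: solved={local_search(random_ksat(n, int(alpha * n)), n)}")
```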

  16. Ion beam machining error control and correction for small scale optics.

    Science.gov (United States)

    Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi

    2011-09-20

    Ion beam figuring (IBF) technology for small scale optical components is discussed. Since a small removal function can be obtained in IBF, it makes computer-controlled optical surfacing technology capable of machining precision centimeter- or millimeter-scale optical components deterministically. When using a small ion beam to machine small optical components, there are some key problems that must be seriously considered, such as positioning the small ion beam on the optical surface, the material removal rate, and the control of the ion beam scanning pitch on the optical surface. The main reason is that a small beam is more sensitive to these problems than a big ion beam because of its smaller beam diameter and lower material removal rate. In this paper, we discuss these problems and their influence on machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is deduced for correcting the positioning error of an ion beam, with the material removal rate estimated by a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples are made, and the final surface errors are both smaller than λ/100 as measured by a Zygo GPI interferometer.

  17. The Scaling of Human Interactions with City Size

    CERN Document Server

    Schläpfer, Markus; Raschke, Mathias; Claxton, Rob; Smoreda, Zbigniew; West, Geoffrey B; Ratti, Carlo

    2012-01-01

    The pace of life accelerates with city size, manifested in a per capita increase of almost all socioeconomic rates such as GDP, wages, violent crime or the transmission of certain contagious diseases. Here, we show that the structure and dynamics of the underlying network of human interactions provides a possible unifying mechanism for the origin of these pervasive regularities. By analyzing billions of anonymized call records from two European countries we find that human social interactions follow a superlinear scale-invariant relationship with city population size. This systematic acceleration of the interaction intensity takes place within specific constraints of social grouping. Together, these results provide a general microscopic basis for a deeper understanding of cities as co-located social networks in space and time, and of the emergent urban socioeconomic processes that characterize complex human societies.

  18. The stealthy nano-machine behind mast cell granule size distribution.

    Science.gov (United States)

    Hammel, Ilan; Meilijson, Isaac

    2015-01-01

    The classical model of mast cell secretory granule formation suggests that newly synthesized secretory mediators, transported from the rough endoplasmic reticulum to the Golgi complex, undergo post-translational modification and are packaged for secretion by condensation within membrane-bound granules of unit size. These unit granules may fuse with other granules to form larger granules that reside in the cytoplasm until secreted. A novel stochastic model for mast cell granule growth and elimination (G&E) as well as inventory management is presented. Resorting to a statistical mechanics approach in which SNAP (Soluble NSF Attachment Protein) REceptor (SNARE) components are viewed as interacting particles, the G&E model provides a simple 'nano-machine' of SNARE self-aggregation that can perform granule growth and secretion. Granule stock is maintained as a buffer to meet uncertainty in demand by the extracellular environment and to serve as a source of supply during the lead time to produce granules of adaptive content. Experimental work, mathematical calculations, statistical modeling and a rationale for the emergence of nearly last-in, first-out inventory management are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Scale effects on the variability of the raindrop size distribution

    Science.gov (United States)

    Raupach, Timothy; Berne, Alexis

    2016-04-01

    The raindrop size distribution (DSD) is of utmost importance to the study of rainfall processes and microphysics. All important rainfall variables can be calculated as weighted moments of the DSD. Quantitative precipitation estimation (QPE) algorithms and numerical weather prediction (NWP) models both use the DSD in order to calculate quantities such as the rain rate. Often these quantities are calculated at a pixel scale: radar reflectivities, for example, are integrated over a volume, so a DSD for the volume must be calculated or assumed. We present results of a study in which we have investigated the change of support problem with respect to the DSD. We have attempted to answer the following two questions. First, if a DSD measured at point scale is used to represent an area, how much error does this introduce? Second, how representative are areal DSDs calculated by QPE and NWP algorithms of the microphysical processes happening inside the pixel of interest? We simulated fields of DSDs at two representative spatial resolutions: at the 2.1 × 2.1 km² resolution of a typical NWP pixel, and at the 5 × 5 km² resolution of a Global Precipitation Measurement (GPM) satellite-based weather radar pixel. The simulation technique uses disdrometer network data and geostatistics to simulate the non-parametric DSD at 100 × 100 m² resolution, conditioned by the measured DSD values. From these simulations, areal DSD measurements were derived and compared to point measurements of the DSD. The results show that the assumption that a point represents an area introduces error that increases with areal size and drop size and decreases with integration time. Further, the results show that current areal DSD estimation algorithms are not always representative of sub-grid DSDs. Idealised simulations of areal DSDs produced representative values for rain rate and radar reflectivity, but estimations of drop concentration and characteristic drop size were often outside the sub-grid value ranges.

  20. Size scaling of negative hydrogen ion sources for fusion

    Energy Technology Data Exchange (ETDEWEB)

    Fantz, U., E-mail: ursel.fantz@ipp.mpg.de; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D. [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany)

    2015-04-08

    The RF-driven negative hydrogen ion source (H⁻, D⁻) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛ scale prototype source being in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges to meet the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the challenge in size scaling of a factor of eight. As an intermediate step a ½ scale ITER source went into operation at the IPP test facility ELISE with the first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER relevant size at ELISE, in which operational issues, physical aspects and the source performance is addressed, highlighting differences as well as similarities. The most ITER relevant results are: low pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and to reduce together with the bias applied between the differently shaped bias plate and the plasma grid the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a dedicated plasma drift in vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to the one in the prototype source despite the large size.

  1. Source size scaling of fragment production in projectile breakup

    CERN Document Server

    Beaulieu, L; Fox, D; Das-Gupta, S; Pan, J; Ball, G C; Djerroud, B; Doré, D; Galindo-Uribarri, A; Guinet, D; Hagberg, E; Horn, D; Laforest, R; Larochelle, Y; Lautesse, P; Samri, M; Roy, R; Saint-Pierre, C

    1996-01-01

    Fragment production has been studied as a function of the source mass and excitation energy in peripheral collisions of ³⁵Cl + ¹⁹⁷Au at 43 MeV/nucleon and ⁷⁰Ge + natTi at 35 MeV/nucleon. The results are compared to the Au+Au data at 600 MeV/nucleon obtained by the ALADIN collaboration. A mass scaling, by A_source ∼ 35 to 190, strongly correlated to excitation energy per nucleon, is presented, suggesting a thermal fragment production mechanism. Comparisons to a standard sequential decay model and the lattice-gas model are made. Fragment emission from a hot, rotating source is unable to reproduce the experimental source size scaling.

  2. On queue-size scaling for input-queued switches

    Directory of Open Access Journals (Sweden)

    Devavrat Shah

    2016-11-01

    We study the optimal scaling of the expected total queue size in an n×n input-queued switch, as a function of the number of ports n and the load factor ρ, which has been conjectured to be Θ(n/(1−ρ)) (cf. [15]). In a recent work [16], the validity of this conjecture has been established for the regime where 1−ρ = O(1/n²). In this paper, we make further progress in the direction of this conjecture. We provide a new class of scheduling policies under which the expected total queue size scales as O(n^1.5 (1−ρ)^−1 log(1/(1−ρ))) when 1−ρ = O(1/n). This is an improvement over the state of the art; for example, for ρ = 1−1/n the best known bound was O(n³), while ours is O(n^2.5 log n).

  3. Universal scaling of grain size distributions during dislocation creep

    Science.gov (United States)

    Aupart, Claire; Dunkel, Kristina G.; Angheluta, Luiza; Austrheim, Håkon; Ildefonse, Benoît; Malthe-Sørenssen, Anders; Jamtveit, Bjørn

    2017-04-01

    Grain size distributions are major sources of information about the mechanisms involved in ductile deformation processes and are often used as paleopiezometers (stress gauges). Several factors have been claimed to influence the stress vs grain size relation, including the water content (Jung & Karato 2001), the temperature (De Bresser et al., 2001), the crystal orientation (Linckens et al., 2016), the presence of second phase particles (Doherty et al. 1997; Cross et al., 2015), and heterogeneous stress distributions (Platt & Behr 2011). However, most of the studies of paleopiezometers have been done in the laboratory under conditions different from those in natural systems. It is therefore essential to complement these studies with observations of naturally deformed rocks. We have measured olivine grain sizes in ultramafic rocks from the Leka ophiolite in Norway and from Alpine Corsica using electron backscatter diffraction (EBSD) data, and calculated the corresponding probability density functions. We compared our results with samples from other studies and localities that have formed under a wide range of stress and strain rate conditions. All distributions collapse onto one universal curve in a log-log diagram where grain sizes are normalized by the mean grain size of each sample. The curve is composed of two straight segments with distinct slopes for grains above and below the mean grain size. These observations indicate that a surprisingly simple and universal power-law scaling describes the grain size distribution in ultramafic rocks during dislocation creep irrespective of stress levels and strain rates. Cross, Andrew J., Susan Ellis, and David J. Prior. 2015. "A Phenomenological Numerical Approach for Investigating Grain Size Evolution in Ductilely Deforming Rocks." Journal of Structural Geology 76 (July): 22-34. doi:10.1016/j.jsg.2015.04.001. De Bresser, J. H. P., J. H. Ter Heege, and C. J. Spiers. 2001. "Grain Size Reduction by Dynamic

  4. A generic trust framework for large-scale open systems using machine learning

    CERN Document Server

    Liu, Xin; Datta, Anwitaman

    2011-01-01

    In many large scale distributed systems and on the web, agents need to interact with other unknown agents to carry out some tasks or transactions. The ability to reason about and assess the potential risks in carrying out such transactions is essential for providing a safe and reliable environment. A traditional approach to reason about the trustworthiness of a transaction is to determine the trustworthiness of the specific agent involved, derived from the history of its behavior. As a departure from such traditional trust models, we propose a generic, machine learning approach based trust framework where an agent uses its own previous transactions (with other agents) to build a knowledge base, and utilize this to assess the trustworthiness of a transaction based on associated features, which are capable of distinguishing successful transactions from unsuccessful ones. These features are harnessed using appropriate machine learning algorithms to extract relationships between the potential transaction and prev...

  5. Finite-size scaling of heavy-light mesons

    CERN Document Server

    Bernardoni, Fabio; Necco, Silvia

    2009-01-01

    We study the finite-size scaling of heavy-light mesons in the static limit. The most relevant effects are due to the pseudo-Goldstone boson cloud. In the HMChPT framework we compute two-point functions of left current densities as well as pseudoscalar densities for the cases in which some or all of them lie in the epsilon-regime. As expected, the finite volume dependence turns out to be significant in this regime and can be predicted in the effective theory in terms of the infinite-volume low-energy couplings. These results might be relevant for the extraction of heavy-light meson properties from lattice simulations.

  6. Some cases of machining large-scale parts: Characterization and modelling of heavy turning, deep drilling and broaching

    Science.gov (United States)

    Haddag, B.; Nouari, M.; Moufki, A.

    2016-10-01

    Machining large-scale parts involves extreme loading at the cutting zone. This paper presents an overview of some cases of machining large-scale parts: heavy turning, deep drilling and broaching processes. It focuses on experimental characterization and modelling methods of these processes. Observed phenomena and/or measured cutting forces are reported. The paper also discusses the predictive ability of the proposed models to reproduce experimental data.

  7. Scale Factor Determination of Micro-Machined Angular Rate Sensors Without a Turntable

    Institute of Scientific and Technical Information of China (English)

    Gaisser Alexander; GAO Zhongyu; ZHOU Bin; ZHANG Rong; CHEN Zhiyong

    2006-01-01

    This paper presents a digital readout system to detect small capacitive signals of a micro-machined angular rate sensor. The flexible parameter adjustment ability and the computation speed of the digital signal processor were used to develop a new calibration procedure to determine the scale factor of a gyroscope without a turntable. The force of gravity was used to deflect the movable masses in the sensor, which resulted in a corresponding angular rate input. The gyroscope scale factor was then measured without a turntable. Test results show a maximum deviation of about 1.2% with respect to the scale factor determined on a turntable with the accuracy independent of the manufacturing process and property variations. The calibration method in combination with the improved readout electronics can minimize the calibration procedure and, thus, reduce the manufacturing costs.

  8. Quantifying the Relationship Between Drainage Networks at Hillslope Scale and Particle Size Distribution at Pedon Scale

    Science.gov (United States)

    Cámara, Joaquín; Martín, Miguel Ángel; Gómez-Miguel, Vicente

    2015-02-01

    Nowadays, translating information about hydrologic and soil properties and processes across scales has emerged as a major theme in soil science and hydrology, and suitable theories for upscaling or downscaling hydrologic and soil information are being sought. The recognition of low-order catchments as self-organized systems suggests the existence of a great number of links at different scales between their elements. The objective of this work was to investigate, in areas of homogeneous bedrock material, the relationship between the hierarchical structure of the drainage networks at hillslope scale and the heterogeneity of the particle-size distribution at pedon scale. One of the most innovative elements in this work is the choice of the parameters to quantify the organization level of the studied features. The fractal dimension has been selected to measure the hierarchical structure of the drainage networks, while the Balanced Entropy Index (BEI) has been the chosen parameter to quantify the heterogeneity of the particle-size distribution from textural data. These parameters have made it possible to establish quantifiable relationships between two features attached to different steps in the scale range. Results suggest that the bedrock lithology of the landscape constrains the architecture of the drainage networks developed on it and the particle soil distribution resulting from the fragmentation processes.

  9. Machine learning for the identification of scaling laws and dynamical systems directly from data in fusion

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A., E-mail: andrea.murari@igi.cnr.i [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padova (Italy); Vega, J. [Asociacion EURATOM-CIEMAT para Fusion, CIEMAT, Madrid (Spain); Mazon, D. [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Patane, D.; Vagliasindi, G.; Arena, P. [Dipartimento di Ingegneria Elettrica Elettronica e dei Sistemi-Universita degli Studi di Catania, 95125 Catania (Italy); Martin, N.; Martin, N.F. [Arts et Metiers Paris Tech Engineering College (ENSAM) 13100 Aix-en-Provence (France); Ratta, G. [Asociacion EURATOM-CIEMAT para Fusion, CIEMAT, Madrid (Spain); Caloone, V. [Arts et Metiers Paris Tech Engineering College (ENSAM) 13100 Aix-en-Provence (France)

    2010-11-11

    Original methods to extract equations directly from experimental signals are presented. These techniques have been applied first to the determination of scaling laws for the threshold between the L and H modes of confinement in Tokamaks. The required equations can be extracted from the weights of neural networks and from the separating hyperplane of Support Vector Machines. More powerful tools are required for the identification of differential equations directly from the time series of the signals. To this end, recurrent neural networks have proved to be very effective at properly identifying ordinary differential equations and have been applied to the coupling between sawteeth and ELMs.
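
    One ingredient of this kind of scaling-law identification can be shown in a few lines: a power law becomes a linear model in log space, so the fitted coefficients are the scaling exponents. The variables below are synthetic placeholders, not the L-H threshold database, and a plain linear regression stands in for the network/SVM extraction.

```python
# Recover power-law exponents by linear regression in log space.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
density = rng.uniform(1, 10, n)                  # hypothetical plasma density
bfield = rng.uniform(1, 5, n)                    # hypothetical magnetic field
radius = rng.uniform(0.5, 2, n)                  # hypothetical machine size
power = 0.05 * density**0.7 * bfield**0.8 * radius**0.9 * np.exp(0.1 * rng.standard_normal(n))

X = np.log(np.column_stack([density, bfield, radius]))
reg = LinearRegression().fit(X, np.log(power))
print("recovered exponents:", np.round(reg.coef_, 2),
      "prefactor:", round(float(np.exp(reg.intercept_)), 3))
```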

  10. Multi products single machine economic production quantity model with multiple batch size

    Directory of Open Access Journals (Sweden)

    Ata Allah Taleizadeh

    2011-04-01

    In this paper, a multi-product single-machine economic production quantity model with discrete delivery is developed. A unique cycle length is considered for all produced items, with the assumption that all products are manufactured on a single machine with limited capacity. The proposed model considers different cost components such as production, setup, holding, and transportation costs. The resulting model is formulated as a mixed integer nonlinear programming model. A harmony search algorithm, the extended cutting plane method and particle swarm optimization are used to solve the proposed model. Two numerical examples are used to analyze and evaluate the performance of the proposed model.

  11. Influence of particle size on Cutting Forces and Surface Roughness in Machining of B4Cp - 6061 Aluminium Matrix Composites

    Science.gov (United States)

    Hiremath, Vijaykumar; Badiger, Pradeep; Auradi, V.; Dundur, S. T.; Kori, S. A.

    2016-02-01

    Amongst advanced materials, metal matrix composites (MMC) are gaining importance as materials for structural applications; in particular, particulate-reinforced aluminium MMCs have received considerable attention due to their superior properties such as high strength to weight ratio, excellent low-temperature performance, high wear resistance and high thermal conductivity. The present study aims at studying and comparing the machinability of B4Cp-reinforced 6061Al alloy metal matrix composites reinforced with 37 μm and 88 μm particulates produced by the stir casting method. The microstructural characterization of the prepared composites is done using Scanning Electron Microscopy equipped with EDX analysis (Hitachi SU-1500 model) to identify the morphology and distribution of B4C particles in the 6061Al matrix. The specimens are turned on a conventional lathe machine using a Polycrystalline Diamond (PCD) tool to study the effect of particle size on the cutting forces and the surface roughness under varying machining parameters, viz., cutting speed (29-45 m/min), feed rate (0.11-0.33 mm/rev) and depth of cut (0.5-1 mm). Results of the microstructural characterization revealed fairly uniform distribution of B4C particles (in both cases, i.e., 37 μm and 88 μm) in the 6061Al matrix. The surface roughness of the composite is influenced by cutting speed. The feed rate and depth of cut have a negative influence on surface roughness. The cutting forces decreased with increase in cutting speed, whereas cutting forces increased with increase in feed and depth of cut. Higher cutting forces are noticed while machining the Al6061 base alloy compared to the reinforced composites. Surface finish is high during turning of the 6061Al base alloy, and surface roughness is high with the 88 μm particle reinforced composites. As the particle size increases, surface roughness also increases.

  12. A Machine Learning Approach to Estimate Riverbank Geotechnical Parameters from Sediment Particle Size Data

    Science.gov (United States)

    Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon

    2015-04-01

    Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is limited to those sites where extensive field data have been collected, and their ability to provide predictions of bank erosion at the reach scale is limited without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Also, numerical models can often be too rigid with respect to detecting unexpected features like the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternative modelling approach capable of using the available data. The Self-Organizing Map (SOM) approach is well suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland State, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and a bootstrap approach. The basis of bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the
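
    A rough sketch of the SOM-based estimation idea is given below, assuming the third-party `minisom` package: train a map on particle-size-style features, attach to each map node the mean target value of the training samples it wins, and read off estimates for held-out samples. All data, column meanings and map dimensions are synthetic placeholders, not the Queensland database or the authors' bootstrap workflow.

```python
# Estimate a geotechnical-style target from PSD-style features via a SOM lookup.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
n = 400
psd = rng.dirichlet(np.ones(6), size=n)                               # fake particle-size fractions
cohesion = 5 + 20 * psd[:, 0] + 10 * psd[:, 1] + rng.normal(0, 1, n)  # fake target parameter

train, test = slice(0, 300), slice(300, None)
som = MiniSom(8, 8, psd.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(psd[train], 5000)

# Average the target over the training samples won by each map node.
node_vals = {}
for x, c in zip(psd[train], cohesion[train]):
    node_vals.setdefault(som.winner(x), []).append(c)
node_mean = {k: np.mean(v) for k, v in node_vals.items()}
overall = cohesion[train].mean()

est = np.array([node_mean.get(som.winner(x), overall) for x in psd[test]])
print("RMSE of SOM-based estimate:", np.sqrt(np.mean((est - cohesion[test]) ** 2)))
```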

  13. Finite-Size Scaling Effects in Chromia thin films

    Science.gov (United States)

    Echtenkamp, Will; He, Xi; Binek, Christian

    2012-02-01

    Controlling magnetism by electrical means remains a key challenge in the area of spintronics. The use of magnetoelectrically active materials is one of the most promising approaches to this problem. Utilizing Cr2O3 as the magnetoelectric pinning layer in a magnetic heterostructure, both temperature-assisted and isothermal electrical control of exchange bias have been achieved [1,2]. Interestingly, this ME switching of exchange bias has only been achieved using bulk Cr2O3 crystals; isothermal switching of exchange bias using thin film chromia remains elusive. We investigate the origin of unusually pronounced finite-size scaling effects on the properties of Cr2O3 grown by Molecular Beam Epitaxy; in particular we focus on the different temperature dependencies of the magnetic susceptibility of bulk vs. thin film chromia, the change in Néel temperatures, and the implications for the magnetoelectric properties of chromia thin films. [1] P. Borisov et al., Phys. Rev. Lett. 94, 117203 (2005). [2] X. He et al., Nature Mater. 9, 579 (2010).

  14. Polynomial Transfer Lot Sizing Techniques for Batch Processing on Consecutive Machines

    Science.gov (United States)

    1989-09-01

    batch, while still specifying sizable batches? Goldratt, the developer of OPT (Optimized Production Technology) [7; 12, pp. 692-715; 10], answered this...and Jeffrey L. Rummel, Batching to Minimize Flow Times on One Machine, Management Science, 33, #6, 1987, pp. 784-799. [7] Goldratt, Eliyahu and Robert

  15. A divide-and-combine method for large scale nonparallel support vector machines.

    Science.gov (United States)

    Tian, Yingjie; Ju, Xuchan; Shi, Yong

    2016-03-01

    Nonparallel Support Vector Machine (NPSVM), which is more flexible and has better generalization than the typical SVM, is widely used for classification. Although methods and toolboxes like SMO and libsvm can be used for NPSVM, NPSVM is hard to scale up when facing millions of samples. In this paper, we propose a divide-and-combine method for large scale nonparallel support vector machines (DCNPSVM). In the division step, DCNPSVM divides samples into smaller sub-samples, aiming at solving smaller subproblems independently. We theoretically and experimentally prove that the objective function value, solutions, and support vectors solved by DCNPSVM are close to the objective function value, solutions, and support vectors of the whole NPSVM problem. In the combination step, the sub-solutions, combined as initial iteration points, are used to solve the whole problem by global coordinate descent, which converges quickly. In order to balance accuracy and efficiency, we adopt a multi-level structure which outperforms state-of-the-art methods. Moreover, our DCNPSVM can tackle unbalanced problems efficiently by tuning the parameters. Experimental results on many large data sets show the effectiveness of our method in memory usage, classification accuracy and running time.
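
    A simplified single-machine analogue of the divide-and-combine idea is sketched below, using a standard SVC instead of NPSVM and skipping the coordinate-descent combination step: train on disjoint chunks, pool the resulting support vectors, and retrain on the pooled set.

```python
# Divide (train per chunk), combine (retrain on pooled support vectors), compare.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=8000, n_features=30, random_state=0)
chunks = np.array_split(np.random.default_rng(0).permutation(len(X)), 4)

sv_idx = []
for c in chunks:                                   # division step: independent subproblems
    m = SVC(kernel="rbf", gamma="scale").fit(X[c], y[c])
    sv_idx.extend(c[m.support_])
sv_idx = np.array(sv_idx)

final = SVC(kernel="rbf", gamma="scale").fit(X[sv_idx], y[sv_idx])    # combination step
baseline = SVC(kernel="rbf", gamma="scale").fit(X, y)                 # whole-problem reference
print("pooled support vectors:", len(sv_idx), "of", len(X))
print("prediction agreement with whole-data SVM:",
      (final.predict(X) == baseline.predict(X)).mean())
```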

  16. Size dependent rupture growth at the scale of real earthquake

    Science.gov (United States)

    Colombelli, Simona; Festa, Gaetano; Zollo, Aldo

    2017-04-01

    When an earthquake starts, the rupture process may evolve in a variety of ways, resulting in the occurrence of different magnitude earthquakes, with variable areal extent and slip, and this may produce an unpredictable damage distribution around the fault zone. The cause of the observed diversity of the rupture process evolution is unknown. There are studies supporting the idea that all earthquakes arise in the same way, while the mechanical conditions of the fault zone may determine the propagation and generation of small or large earthquakes. Other studies show that small and large earthquakes are different from the initial stage of the rupture beginning. Among them, Colombelli et al. (2014) observed that the initial slope of the P-wave peak displacement could be a discriminant for the final earthquake size, so that small and large ruptures show a different behavior in their initial stage. In this work we perform a detailed analysis of the time evolution of the P-wave peak amplitude for a set of few, co-located events, during the 2008, Iwate-Miyagi (Japan) earthquake sequence. The events have magnitude between 3.2 and 7.2 and their epicentral coordinates vary in a narrow range, with a maximum distance among the epicenters of about 15 km. After applying a refined technique for data processing, we measured the initial Peak Displacement (Pd) as the absolute value of the vertical component of displacement records, starting from the P-wave arrival time and progressively expanding the time window. For each event, we corrected the observed Pd values at different stations for the distance effect and computed the average logarithm of Pd as a function of time. The overall shape of the Pd curves (in log-lin scale) is consistent with what has been previously observed for a larger dataset by Colombelli et al. (2014). The initial amplitude begins with small values and then increases with time, until a plateau level is reached. However, we observed essential differences in the

  17. Minimum sample size requirements for Mokken scale analysis

    NARCIS (Netherlands)

    Straat, J.H.; van der Ark, L.A.; Sijtsma, K.

    2014-01-01

    An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken’s original automated item selection procedure (AISP)

  18. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    Science.gov (United States)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground-truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 of the time and about 78% for 19/20 of the time when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
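
    As a toy illustration of the boosting-over-Haar-features pipeline described above (not the authors' cloud system), the sketch below computes two simple Haar-like responses from integral images of small patches and trains AdaBoost on them; the patch size, the two hand-picked features and the synthetic labels are assumptions.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier

      def haar_features(patch):
          """Two basic Haar-like responses from an integral image: left-right and top-bottom contrast."""
          ii = patch.cumsum(axis=0).cumsum(axis=1)      # integral image
          h, w = patch.shape
          total = ii[-1, -1]
          left = ii[-1, w // 2 - 1]                     # sum of the left half
          top = ii[h // 2 - 1, -1]                      # sum of the top half
          return np.array([2 * left - total, 2 * top - total])

      rng = np.random.default_rng(0)
      patches = rng.random((1000, 16, 16))              # stand-ins for annotated video-frame patches
      X = np.array([haar_features(p) for p in patches])
      y = (X[:, 0] > np.median(X[:, 0])).astype(int)    # synthetic "vehicle"/"background" labels
      clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)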

  19. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    Science.gov (United States)

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real-world contexts, extending from web mining, gene expression analysis and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of the FSVD-H-ELM.
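
    A minimal numpy sketch of the idea, under the assumption that the top right-singular vectors of random data subsets serve as hidden-node weights and that the output weights are obtained by ridge-regularized least squares; the subset sizes, tanh activation and regularization constant are illustrative choices, not the authors' settings.

      import numpy as np

      def fsvd_elm_fit(X, y, n_hidden=50, n_subsets=5, subset_size=200, reg=1e-3, seed=0):
          """ELM whose hidden weights come from SVDs of random subsets of the data."""
          rng = np.random.default_rng(seed)
          per = -(-n_hidden // n_subsets)                     # ceil division: nodes per subset
          weights = []
          for _ in range(n_subsets):
              idx = rng.choice(len(X), size=min(subset_size, len(X)), replace=False)
              _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
              weights.append(Vt[:per])                        # top right-singular vectors
          W = np.vstack(weights)[:n_hidden]                   # (n_hidden, n_features)
          H = np.tanh(X @ W.T)                                # hidden-layer activations
          beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ y)
          return W, beta

      def fsvd_elm_predict(X, W, beta):
          return np.tanh(X @ W.T) @ beta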

  20. Supervised machine learning on a network scale: application to seismic event classification and detection

    Science.gov (United States)

    Reynen, Andrew; Audet, Pascal

    2017-09-01

    A new method using a machine learning technique is applied to event classification and detection at seismic networks. This method is applicable to a variety of network sizes and settings. The algorithm makes use of a small catalogue of known observations across the entire network. Two attributes, the polarization and frequency content, are used as input to regression. These attributes are extracted at predicted arrival times for P and S waves using only an approximate velocity model, as attributes are calculated over large time spans. This method of waveform characterization is shown to be able to distinguish between blasts and earthquakes with 99 per cent accuracy using a network of 13 stations located in Southern California. The combination of machine learning with generalized waveform features is further applied to event detection in Oklahoma, United States. The event detection algorithm makes use of a pair of unique seismic phases to locate events, with a precision directly related to the sampling rate of the generalized waveform features. Over a week of data from 30 stations in Oklahoma, United States is used to automatically detect 25 times more events than the catalogue of the local geological survey, with a false detection rate of less than 2 per cent. This method provides a highly confident way of detecting and locating events. Furthermore, a large number of seismic events can be automatically detected with a low false-alarm rate, allowing for a larger automatic event catalogue with a high degree of trust.

  1. CT crown for on-machine scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for on-machine calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises an invar disc on which several reference ruby spheres are positioned at different heights using carbon fibre rods. The artefact is positioned and scanned together with the workpiece inside the CT scanner, producing a 3D reference system for the measurement. The artefact allows a considerable reduction of time by compressing the workflow of calibration, scanning, measurement, and re-calibration. Furthermore, the method allows a considerable reduction of the amount of data generated from CT scanning. A prototype was calibrated on a tactile CMM and its applicability in CT scanning demonstrated using a calibrated workpiece.

  2. Spatial patterns of correlated scale size and scale color in relation to color pattern elements in butterfly wings.

    Science.gov (United States)

    Iwata, Masaki; Otaki, Joji M

    2016-02-01

    Complex butterfly wing color patterns are coordinated throughout a wing by unknown mechanisms that provide undifferentiated immature scale cells with positional information for scale color. Because there is a reasonable level of correspondence between the color pattern element and scale size at least in Junonia orithya and Junonia oenone, a single morphogenic signal may contain positional information for both color and size. However, this color-size relationship has not been demonstrated in other species of the family Nymphalidae. Here, we investigated the distribution patterns of scale size in relation to color pattern elements on the hindwings of the peacock pansy butterfly Junonia almana, together with other nymphalid butterflies, Vanessa indica and Danaus chrysippus. In these species, we observed a general decrease in scale size from the basal to the distal areas, although the size gradient was small in D. chrysippus. Scales of dark color in color pattern elements, including eyespot black rings, parafocal elements, and submarginal bands, were larger than those of their surroundings. Within an eyespot, the largest scales were found at the focal white area, although there were exceptional cases. Similarly, ectopic eyespots that were induced by physical damage on the J. almana background area had larger scales than in the surrounding area. These results are consistent with the previous finding that scale color and size coordinate to form color pattern elements. We propose a ploidy hypothesis to explain the color-size relationship in which the putative morphogenic signal induces the polyploidization (genome amplification) of immature scale cells and that the degrees of ploidy (gene dosage) determine scale color and scale size simultaneously in butterfly wings.

  3. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales

    Directory of Open Access Journals (Sweden)

    Jihoon Oh

    2017-09-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89) and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC: 1-month, 0.75; 1-year, 0.85; lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.
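
    The modelling strategy, training a neural-network classifier on many scale scores and then checking how much performance survives when only the top-ranked predictors are kept, can be sketched as follows on synthetic data. The predictor count matches the 41 variables mentioned above, but the network size, the synthetic labels and the use of permutation importance for ranking are assumptions made only for illustration.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(573, 41))              # 31 scale scores + 10 sociodemographic variables (synthetic)
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=573) > 1.5).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

      # rank predictors by permutation importance, then retrain using only the top five
      imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
      top5 = np.argsort(imp.importances_mean)[::-1][:5]
      clf_top5 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr[:, top5], y_tr)
      print(clf.score(X_te, y_te), clf_top5.score(X_te, y_te))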

  4. Inverse size scaling of the nucleolus by a concentration-dependent phase transition.

    Science.gov (United States)

    Weber, Stephanie C; Brangwynne, Clifford P

    2015-03-01

    Just as organ size typically increases with body size, the size of intracellular structures changes as cells grow and divide. Indeed, many organelles, such as the nucleus [1, 2], mitochondria [3], mitotic spindle [4, 5], and centrosome [6], exhibit size scaling, a phenomenon in which organelle size depends linearly on cell size. However, the mechanisms of organelle size scaling remain unclear. Here, we show that the size of the nucleolus, a membraneless organelle important for cell-size homeostasis [7], is coupled to cell size by an intracellular phase transition. We find that nucleolar size directly scales with cell size in early C. elegans embryos. Surprisingly, however, when embryo size is altered, we observe inverse scaling: nucleolar size increases in small cells and decreases in large cells. We demonstrate that this seemingly contradictory result arises from maternal loading of a fixed number rather than a fixed concentration of nucleolar components, which condense into nucleoli only above a threshold concentration. Our results suggest that the physics of phase transitions can dictate whether an organelle assembles, and, if so, its size, providing a mechanistic link between organelle assembly and cell size. Since the nucleolus is known to play a key role in cell growth, this biophysical readout of cell size could provide a novel feedback mechanism for growth control.

  5. Ultrafast laser ablation and machining large-size structures on porcine bone.

    Science.gov (United States)

    An, Ran; Khadar, Ghadeer W; Wilk, Emilia I; Emigh, Brent; Haugen, Harold K; Wohl, Gregory R; Dunlop, Brett; Anvari, Mehran; Hayward, Joseph E; Fang, Qiyin

    2013-07-01

    When using ultrafast laser ablation in some orthopedic applications where precise cutting/drilling is required with minimal damage to collateral tissue, it is challenging to produce large-sized and deep holes using a tightly focused laser beam. The feasibility of producing deep, millimeter-size structures under different ablation strategies is investigated. X-ray computed microtomography was employed to analyze the morphology of these structures. Our results demonstrated the feasibility of producing holes with sizes required in clinical applications using concentric and helical ablation protocols.

  6. Energetics, scaling and sexual size dimorphism of spiders.

    Science.gov (United States)

    Grossi, B; Canals, M

    2015-03-01

    The extreme sexual size dimorphism in spiders has motivated studies for many years. In many species the male can be very small relative to the female. There are several hypotheses trying to explain this fact, most of them emphasizing the role of energy in determining spider size. The aim of this paper is to review the role of energy in sexual size dimorphism of spiders, even for those spiders that do not necessarily live in high foliage, using physical and allometric principles. Here we propose that the cost of transport or equivalently energy expenditure and the speed are traits under selection pressure in male spiders, favoring those of smaller size to reduce travel costs. The morphology of the spiders responds to these selective forces depending upon the lifestyle of the spiders. Climbing and bridging spiders must overcome the force of gravity. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. In wandering spiders with low population density and as a consequence few male-male interactions, high speed and low energy expenditure or cost of transport should be favored by natural selection. Pendulum mechanics show the advantages of long legs in spiders and their relationship with high speed, even in climbing and bridging spiders. Thus small size, compensated by long legs should be the expected morphology for a fast and mobile male spider.

  7. Avalanche size scaling in sheared three-dimensional amorphous solid

    DEFF Research Database (Denmark)

    Bailey, Nicholas; Schiøtz, Jakob; Lemaître, A.

    2007-01-01

    We study the statistics of plastic rearrangement events in a simulated amorphous solid at T=0. Events are characterized by the energy release and the "slip volume", the product of plastic strain and system volume. Their distributions for a given system size L appear to be exponential...

  8. Prediction of Solar Flare Size and Time-to-Flare Using Support Vector Machine Regression

    CERN Document Server

    Boucheron, Laura E; McAteer, R T James

    2015-01-01

    We study the prediction of solar flare size and time-to-flare using 38 features describing magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a Geostationary Operational Environmental Satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately 3/4 a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity fe...
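
    A compact sketch of the regression-then-threshold approach on synthetic features: the feature count matches the 38 magnetic-complexity descriptors mentioned above, but the kernel, hyper-parameters, synthetic labels and threshold value are assumptions rather than the study's configuration.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)
      X = rng.normal(size=(500, 38))                                        # magnetic-complexity features (synthetic)
      flare_size = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=500)   # continuous flare-size label

      reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1)).fit(X, flare_size)
      predicted_size = reg.predict(X)

      # thresholding the regressed size turns the regressor into a flare / no-flare predictor
      flare_flag = predicted_size > 1.0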

  9. Magnetic pattern at supergranulation scale: the Void Size Distribution

    CERN Document Server

    Berrilli, Francesco; Del Moro, Dario

    2014-01-01

    The large-scale magnetic pattern of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits voids in magnetic organization. Such voids include internetwork fields, a mixed-polarity sparse field that populates the inner part of network cells. To single out voids and to quantify their intrinsic pattern, a fast circle-packing-based algorithm is applied to 511 SOHO/MDI high resolution magnetograms acquired during the outstanding solar activity minimum between cycles 23 and 24. The computed Void Distribution Function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in such a range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids reveal a departure from a simple exponential decay around 35 Mm.

  10. Scaling beyond one rack and sizing of Hadoop platform

    OpenAIRE

    Litke, W.; Budka, Marcin

    2015-01-01

    This paper focuses on two aspects of configuration choices of the Hadoop platform. Firstly we are looking to establish performance implications of expanding an existing Hadoop cluster beyond a single rack. In the second part of the testing we are focusing on performance differences when deploying clusters of different sizes. The study also examines constraints of the disk latency found on the test cluster during our experiments and discusses their impact on the overall performance. All test...

  11. Sizing aspects of a small scale grid connected PV system

    Energy Technology Data Exchange (ETDEWEB)

    Bartha, S.; Teodoreanu, D.I.; Teodoreanu, M.; Negreanu, C. [I.C.P.E.-New Energy Sources Laboratory (NESL), Bucharest (Romania); Farkas, I.; Seres, I. [Szent Istvan University, Goedoelloe (Hungary). Department of Physics and Process Control

    2008-07-01

    Photovoltaics can be used in grid-connected mode in two ways: as arrays installed at the end-use site, such as on rooftops, or as utility-scale generating stations. The present paper describes a small-scale grid-connected photovoltaic system, starting with the structure and characterization of the system; the principal technical parameters are also presented. The monitored parameters comprise the principal meteorological data (air temperature and solar radiation) for the site at Agigea, on the Black Sea, and the energy produced by the PV modules. The present application consists of one subsystem of 1200 Wp with adjustable panel inclination, using different types of PV modules. The paper presents a simulation model of this system realized with commercial software packages and with a self-made Matlab model that evaluates the energy balance of the PV system. All simulation and measurement data are presented. (orig.)

  12. Influence of Varying Training Set Composition and Size on Support Vector Machine-Based Prediction of Active Compounds.

    Science.gov (United States)

    Rodríguez-Pérez, Raquel; Vogt, Martin; Bajorath, Jürgen

    2017-04-24

    Support vector machine (SVM) modeling is one of the most popular machine learning approaches in chemoinformatics and drug design. The influence of training set composition and size on predictions currently is an underinvestigated issue in SVM modeling. In this study, we have derived SVM classification and ranking models for a variety of compound activity classes under systematic variation of the number of positive and negative training examples. With increasing numbers of negative training compounds, SVM classification calculations became increasingly accurate and stable. However, this was only the case if a required threshold of positive training examples was also reached. In addition, consideration of class weights and optimization of cost factors substantially aided in balancing the calculations for increasing numbers of negative training examples. Taken together, the results of our analysis have practical implications for SVM learning and the prediction of active compounds. For all compound classes under study, top recall performance and independence of compound recall of training set composition was achieved when 250-500 active and 500-1000 randomly selected inactive training instances were used. However, as long as ∼50 known active compounds were available for training, increasing numbers of 500-1000 randomly selected negative training examples significantly improved model performance and gave very similar results for different training sets.
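
    The effect of adding randomly selected negatives while keeping the positives fixed, with class weights compensating for the imbalance, can be probed with a sketch like the one below; the synthetic data generator, RBF kernel and recall metric are stand-ins for the chemoinformatics setup and compound activity classes of the study.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      n_pos = 250
      for n_neg in (500, 1000, 2000):
          X, y = make_classification(n_samples=n_pos + n_neg,
                                     weights=[n_neg / (n_pos + n_neg)],   # class 0 = "inactive" compounds
                                     n_features=20, random_state=0)
          svm = SVC(kernel="rbf", C=1.0, class_weight="balanced")         # class weights rebalance the loss
          recall = cross_val_score(svm, X, y, cv=5, scoring="recall").mean()
          print(n_neg, round(recall, 3))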

  13. Taking a Hands-On Approach: Apparent Grasping Ability Scales the Perception of Object Size

    Science.gov (United States)

    Linkenauger, Sally A.; Witt, Jessica K.; Proffitt, Dennis R.

    2011-01-01

    We examined whether the apparent size of an object is scaled to the morphology of the relevant body part with which one intends to act on it. To be specific, we tested if the visually perceived size of graspable objects is scaled to the extent of apparent grasping ability for the individual. Previous research has shown that right-handed…

  14. Unsupervised learning framework for large-scale flight data analysis of cockpit human machine interaction issues

    Science.gov (United States)

    Vaidya, Abhishek B.

    As the level of automation within an aircraft increases, the interactions between the pilot and autopilot play a crucial role in its proper operation. Issues with human machine interactions (HMI) have been cited as one of the main causes behind many aviation accidents. Due to the complexity of such interactions, it is challenging to identify all possible situations and develop the necessary contingencies. In this thesis, we propose a data-driven analysis tool to identify potential HMI issues in large-scale Flight Operational Quality Assurance (FOQA) dataset. The proposed tool is developed using a multi-level clustering framework, where a set of basic clustering techniques are combined with a consensus-based approach to group HMI events and create a data-driven model from the FOQA data. The proposed framework is able to effectively compress a large dataset into a small set of representative clusters within a data-driven model, enabling subject matter experts to effectively investigate identified potential HMI issues.

  15. Evaluating machine learning algorithms estimating tremor severity ratings on the Bain-Findley scale

    Science.gov (United States)

    Yohanandan, Shivanthan A. C.; Jones, Mary; Peppard, Richard; Tan, Joy L.; McDermott, Hugh J.; Perera, Thushara

    2016-12-01

    Tremor is a debilitating symptom of some movement disorders. Effective treatment, such as deep brain stimulation (DBS), is contingent upon frequent clinical assessments using instruments such as the Bain-Findley tremor rating scale (BTRS). Many patients, however, do not have access to frequent clinical assessments. Wearable devices have been developed to provide patients with access to frequent objective assessments outside the clinic via telemedicine. Nevertheless, the information they report is not in the form of BTRS ratings. One way to transform this information into BTRS ratings is through linear regression models (LRMs). Another, potentially more accurate method is through machine learning classifiers (MLCs). This study aims to compare MLCs and LRMs, and identify the most accurate model that can transform objective tremor information into tremor severity ratings on the BTRS. Nine participants with upper limb tremor had their DBS stimulation amplitude varied while they performed clinical upper-extremity exercises. Tremor features were acquired using the tremor biomechanics analysis laboratory (TREMBAL). Movement disorder specialists rated tremor severity on the BTRS from video recordings. Seven MLCs and 6 LRMs transformed TREMBAL features into tremor severity ratings on the BTRS using the specialists' ratings as training data. The weighted Cohen's kappa (κw) defined the models' rating accuracy. This study shows that the Random Forest MLC was the most accurate model (κw = 0.81) at transforming tremor information into BTRS ratings, thereby improving the clinical interpretation of tremor information obtained from wearable devices.
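
    A sketch of how objective tremor features might be mapped to ordinal severity ratings with a Random Forest and scored with a weighted Cohen's kappa; the synthetic feature set, the 0-10 rating range and the linear kappa weighting are assumptions rather than the study's exact protocol.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import cohen_kappa_score
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(3)
      X = rng.normal(size=(400, 6))                                   # objective tremor features (synthetic)
      y = np.clip((2 * X[:, 0] + 4 + rng.normal(size=400)).round(), 0, 10).astype(int)  # rating-like labels

      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      pred = cross_val_predict(rf, X, y, cv=3)
      print("weighted kappa:", cohen_kappa_score(y, pred, weights="linear"))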

  16. Identification of characteristic ELM evolution patterns with Alfven-scale measurements and unsupervised machine learning analysis

    Science.gov (United States)

    Smith, David R.; Fonck, R. J.; McKee, G. R.; Diallo, A.; Kaye, S. M.; Leblanc, B. P.; Sabbagh, S. A.

    2016-10-01

    Edge localized mode (ELM) saturation mechanisms, filament dynamics, and multi-mode interactions require nonlinear models, and validation of nonlinear ELM models requires fast, localized measurements on Alfven timescales. Recently, we investigated characteristic ELM evolution patterns with Alfven-scale measurements from the NSTX/NSTX-U beam emission spectroscopy (BES) system. We applied clustering algorithms from the machine learning domain to ELM time-series data. The algorithms identified two or three groups of ELM events with distinct evolution patterns. In addition, we found that the identified ELM groups correspond to distinct parameter regimes for plasma current, shape, magnetic balance, and density pedestal profile. The observed characteristic evolution patterns and corresponding parameter regimes suggest genuine variation in the underlying physical mechanisms that influence the evolution of ELM events and motivate nonlinear MHD simulations. Here, we review the previous results for characteristic ELM evolution patterns and parameter regimes, and we report on a new effort to explore the identified ELM groups with 2D BES measurements and nonlinear MHD simulations. Supported by U.S. Department of Energy Award Numbers DE-SC0001288 and DE-AC02-09CH11466.

  17. Digital Library Image Retrieval using Scale Invariant Feature and Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hongtao Zhang

    2014-10-01

    With the advance of digital libraries, digital content has developed rich information connotations. Traditional information retrieval methods based on external characteristics and text descriptions are unable to sufficiently reveal and express the substance and semantic relations of multimedia information, and unable to fully describe its representative characteristics. Because of the rich connotation of image content and people's subjectivity in interpreting it, the visual features of an image are difficult to describe with keywords. Therefore, such methods cannot always meet users' needs, and the study of content-based image retrieval techniques for digital libraries is important to both academic research and applications. At present, image retrieval methods are mainly based on text and content, but existing algorithms have shortcomings such as large errors and slow speeds. Motivated by this, we propose a new approach based on the relevance vector machine (RVM). The proposed approach first extracts patch-level scale-invariant image features (SIFT) and then constructs global features for the images. The image feature is then fed into the RVM for retrieval. We evaluate the proposed approach on the Corel dataset. The experimental results show that the proposed method achieves high accuracy when retrieving images.
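
    The patch-level SIFT extraction and global-feature construction can be sketched with OpenCV as below. Because a relevance vector machine is not available in the common Python libraries, the retrieval step here is replaced by plain cosine-similarity ranking, named as such; the mean-pooling scheme and descriptor cap are also assumptions.

      import cv2
      import numpy as np

      def global_sift_feature(path, n_keep=64):
          """Pool patch-level SIFT descriptors into one global vector by mean pooling."""
          img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          sift = cv2.SIFT_create()
          _, desc = sift.detectAndCompute(img, None)
          if desc is None:
              return np.zeros(128)
          return desc[:n_keep].mean(axis=0)

      def retrieve(query_vec, database_vecs, top_k=5):
          """Rank database images by cosine similarity to the query feature (stand-in for the RVM step)."""
          db = np.asarray(database_vecs)
          sims = db @ query_vec / (np.linalg.norm(db, axis=1) * np.linalg.norm(query_vec) + 1e-12)
          return np.argsort(sims)[::-1][:top_k]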

  18. The Large Scale Machine Learning in an Artificial Society: Prediction of the Ebola Outbreak in Beijing

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2015-01-01

    Ebola virus disease (EVD) is distinguished by its high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict possible epidemic situations in practice. Fortunately, in recent years computational experiments based on artificial societies have appeared, providing a new approach to studying the propagation of EVD and analyzing the corresponding interventions. Therefore, the rationality of the artificial society is the key to the accuracy and reliability of the experimental results. Individuals' behaviors, along with travel modes, directly affect the propagation among individuals. Firstly, an artificial Beijing is reconstructed based on geodemographics, and machine learning is used to optimize individuals' behaviors. Meanwhile, an Ebola course model and a propagation model are built according to the parameters observed in West Africa. Subsequently, the propagation mechanism of EVD is analyzed, the epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of the Chinese government, the conclusion is drawn that Ebola will not break out on a large scale in the city of Beijing.

  19. Finite Size Scaling and "perfect" actions: the three dimensional Ising model

    CERN Document Server

    Ballesteros, H G; Martín-Mayor, V; Muñoz-Sudupe, A

    1998-01-01

    Using Finite-Size Scaling techniques, we numerically show that the first irrelevant operator of the lattice $\\lambda\\phi^4$ theory in three dimensions is (within errors) completely decoupled at $\\lambda=1.0$. This interesting result also holds in the Thermodynamical Limit, where the renormalized coupling constant shows an extraordinary reduction of the scaling-corrections when compared with the Ising model. It is argued that Finite-Size Scaling analysis can be a competitive method for finding improved actions.

  20. Scaling relationships among twig size, leaf size and leafing intensity in a successional series of subtropical forests.

    Science.gov (United States)

    Yan, En-Rong; Wang, Xi-Hua; Chang, Scott X; He, Fangliang

    2013-06-01

    Scaling relationships among twig size, leaf size and leafing intensity fundamentally influence the twig-leaf deployment pattern, a property that affects the architecture and functioning of plants. However, our understanding of how these relationships change within a species or between species as a function of forest succession is unclear. We determined log-log scaling relationships between twig cross-sectional area (twig size) and each of total and individual leaf area, and leafing intensity (the number of leaves per twig volume) for 78 woody species along a successional series in subtropical evergreen forests in eastern China. The series included four stages: secondary shrub (S1), young (S2), sub-climax (S3) and climax evergreen broadleaved forests (S4). The scaling slopes in each of the three relationships did not differ among the four stages. The y-intercept did not shift among the successional stages in the relationship between twig cross-sectional area and total leaf area; however, the y-intercept was greatest in S4, intermediate in S3 and lowest in S2 and S1 for the relationship between twig size and individual leaf area, while the opposite pattern was found for the twig size-leafing intensity relationship. This indicates that late successional trees have few but large leaves while early successional trees have more small leaves per unit twig size. For the relationship between twig cross-sectional area and total leaf area, there was no difference in the regression slope between recurrent (appear in more than one stages) and non-recurrent species (appear in only one stage) for each of the S1-S2, S2-S3 and S3-S4 pairs. A significant difference in the y-intercept was found in the S2-S3 pair only. In the relationship between twig cross-sectional area and individual leaf area, the regression slope between recurrent and non-recurrent species was homogeneous in the S1-S2 and S3-S4 pairs, but heterogeneous in the S2-S3 pair. We conclude that forest succession caused

  1. Gyrokinetic simulations of turbulent transport: size scaling and chaotic behaviour

    Science.gov (United States)

    Villard, L.; Bottino, A.; Brunner, S.; Casati, A.; Chowdhury, J.; Dannert, T.; Ganesh, R.; Garbet, X.; Görler, T.; Grandgirard, V.; Hatzky, R.; Idomura, Y.; Jenko, F.; Jolliet, S.; Khosh Aghdam, S.; Lapillonne, X.; Latu, G.; McMillan, B. F.; Merz, F.; Sarazin, Y.; Tran, T. M.; Vernay, T.

    2010-12-01

    Important steps towards the understanding of turbulent transport have been made with the development of the gyrokinetic framework for describing turbulence and with the emergence of numerical codes able to solve the set of gyrokinetic equations. This paper presents some of the main recent advances in gyrokinetic theory and computing of turbulence. Solving 5D gyrokinetic equations for each species requires state-of-the-art high performance computing techniques involving massively parallel computers and parallel scalable algorithms. The various numerical schemes that have been explored until now, Lagrangian, Eulerian and semi-Lagrangian, each have their advantages and drawbacks. A past controversy regarding the finite size effect (finite ρ*) in ITG turbulence has now been resolved. It has triggered an intensive benchmarking effort and careful examination of the convergence properties of the different numerical approaches. Now, both Eulerian and Lagrangian global codes are shown to agree and to converge to the flux-tube result in the ρ* → 0 limit. It is found, however, that an appropriate treatment of geometrical terms is necessary: inconsistent approximations that are sometimes used can lead to important discrepancies. Turbulent processes are characterized by a chaotic behaviour, often accompanied by bursts and avalanches. Performing ensemble averages of statistically independent simulations, starting from different initial conditions, is presented as a way to assess the intrinsic variability of turbulent fluxes and obtain reliable estimates of the standard deviation. Further developments concerning non-adiabatic electron dynamics around mode-rational surfaces and electromagnetic effects are discussed.

  2. A Comparison of Crater-Size Scaling and Ejection-Speed Scaling During Experimental Impacts in Sand

    Science.gov (United States)

    Anderson, J. L. B.; Cintala, M. J.; Johnson, M. K.

    2014-01-01

    Non-dimensional scaling relationships are used to understand various cratering processes including final crater sizes and the excavation of material from a growing crater. The principal assumption behind these scaling relationships is that these processes depend on a combination of the projectile's characteristics, namely its diameter, density, and impact speed. This simplifies the impact event into a single point-source. So long as the process of interest is beyond a few projectile radii from the impact point, the point-source assumption holds. These assumptions can be tested through laboratory experiments in which the initial conditions of the impact are controlled and resulting processes measured directly. In this contribution, we continue our exploration of the congruence between crater-size scaling and ejection-speed scaling relationships. In particular, we examine a series of experimental suites in which the projectile diameter and average grain size of the target are varied.

  3. Automatic event detection in low SNR microseismic signals based on multi-scale permutation entropy and a support vector machine

    Science.gov (United States)

    Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming

    2016-12-01

    Microseismic monitoring is an effective means for providing early warning of rock or coal dynamical disasters, and its first step is microseismic event detection, although low SNR microseismic signals often cannot effectively be detected by routine methods. To solve this problem, this paper presents permutation entropy and a support vector machine to detect low SNR microseismic events. First, an extraction method of signal features based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, the detection model of low SNR microseismic events based on the least squares support vector machine is built by performing a multi-scale permutation entropy calculation for the collected vibration signals, constructing a feature vector set of signals. Finally, a comparative analysis of the microseismic events and noise signals in the experiment proves that the different characteristics of the two can be fully expressed by using multi-scale permutation entropy. The detection model of microseismic events combined with the support vector machine, which has the features of high classification accuracy and fast real-time algorithms, can meet the requirements of online, real-time extractions of microseismic events.
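
    The feature-extraction step can be made concrete with a small implementation of multi-scale permutation entropy: coarse-grain the signal at each scale factor, then compute the ordinal-pattern entropy. The embedding dimension, scale factors, and the mention of sklearn's standard SVC in place of the least squares SVM used in the paper are assumptions for illustration.

      import numpy as np
      from itertools import permutations

      def permutation_entropy(x, m=4, delay=1):
          """Normalised Shannon entropy of ordinal patterns of length m."""
          patterns = list(permutations(range(m)))
          counts = np.zeros(len(patterns))
          for i in range(len(x) - (m - 1) * delay):
              pat = tuple(int(v) for v in np.argsort(x[i:i + m * delay:delay]))
              counts[patterns.index(pat)] += 1
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log(p)).sum() / np.log(len(patterns)))

      def multiscale_pe(x, scales=(1, 2, 3, 4, 5), m=4):
          """Coarse-grain by non-overlapping averaging at each scale factor, then compute PE."""
          feats = []
          for s in scales:
              cg = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
              feats.append(permutation_entropy(cg, m=m))
          return np.array(feats)

      # feature vectors from many signal windows would then be fed to an SVM,
      # e.g. sklearn.svm.SVC(kernel="rbf").fit(features, labels)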

  4. Automatic event detection in low SNR microseismic signals based on multi-scale permutation entropy and a support vector machine

    Science.gov (United States)

    Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming

    2017-07-01

    Microseismic monitoring is an effective means for providing early warning of rock or coal dynamical disasters, and its first step is microseismic event detection, although low SNR microseismic signals often cannot effectively be detected by routine methods. To solve this problem, this paper presents permutation entropy and a support vector machine to detect low SNR microseismic events. First, an extraction method of signal features based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, the detection model of low SNR microseismic events based on the least squares support vector machine is built by performing a multi-scale permutation entropy calculation for the collected vibration signals, constructing a feature vector set of signals. Finally, a comparative analysis of the microseismic events and noise signals in the experiment proves that the different characteristics of the two can be fully expressed by using multi-scale permutation entropy. The detection model of microseismic events combined with the support vector machine, which has the features of high classification accuracy and fast real-time algorithms, can meet the requirements of online, real-time extractions of microseismic events.

  5. On-line transient stability assessment of large-scale power systems by using ball vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Mohammadi, M., E-mail: m.mohammadi@aut.ac.i [School of Electrical and Computer Engineering, Shiraz University, Shiraz (Iran, Islamic Republic of); Gharehpetian, G.B., E-mail: grptian@aut.ac.i [Electrical Engineering Department, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)

    2010-04-15

    In this paper, the ball vector machine (BVM) has been used for on-line transient stability assessment of large-scale power systems. To classify the system transient security status, a BVM has been trained for all contingencies. The proposed BVM-based security assessment algorithm has very small training time and space requirements in comparison with artificial neural networks (ANN), support vector machines (SVM) and other machine learning based algorithms. In addition, the proposed algorithm has fewer support vectors (SV) and is therefore faster than existing algorithms for on-line applications. One of the main issues in applying a machine learning method is feature selection. In this paper, a new Decision Tree (DT) based feature selection technique has been presented. The proposed BVM based algorithm has been applied to the New England 39-bus power system. The simulation results show the effectiveness and the stability of the proposed method for the on-line transient stability assessment procedure of large-scale power systems. The proposed feature selection algorithm has been compared with different feature selection algorithms, and the simulation results demonstrate its effectiveness.

  6. Reynolds number scaling to predict droplet size distribution in dispersed and undispersed subsurface oil releases.

    Science.gov (United States)

    Li, Pu; Weng, Linlu; Niu, Haibo; Robinson, Brian; King, Thomas; Conmy, Robyn; Lee, Kenneth; Liu, Lei

    2016-12-15

    This study was aimed at testing the applicability of modified Weber number scaling with Alaska North Slope (ANS) crude oil, and developing a Reynolds number scaling approach for oil droplet size prediction for high viscosity oils. Dispersant to oil ratio and empirical coefficients were also quantified. Finally, a two-step Rosin-Rammler scheme was introduced for the determination of droplet size distribution. This new approach appeared more advantageous in avoiding the inconsistency in interfacial tension measurements, and consequently delivered concise droplet size prediction. Calculated and observed data correlated well based on Reynolds number scaling. The relation indicated that chemical dispersant played an important role in reducing the droplet size of ANS under different seasonal conditions. The proposed Reynolds number scaling and two-step Rosin-Rammler approaches provide a concise, reliable way to predict droplet size distribution, supporting decision making in chemical dispersant application during an offshore oil spill.
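
    The Rosin-Rammler step can be written down explicitly. The single-mode cumulative form below is the standard one; the bimodal mixture is only one plausible reading of the paper's "two-step" scheme, and all parameter names are placeholders rather than the authors' notation.

      import numpy as np

      def rosin_rammler_cdf(d, d_char, k):
          """Cumulative volume fraction of droplets smaller than d (characteristic diameter d_char, spread k)."""
          return 1.0 - np.exp(-(np.asarray(d, dtype=float) / d_char) ** k)

      def two_step_rosin_rammler(d, d1, k1, d2, k2, w):
          """Weighted mixture of two Rosin-Rammler distributions (e.g. fine and coarse droplet modes)."""
          return w * rosin_rammler_cdf(d, d1, k1) + (1.0 - w) * rosin_rammler_cdf(d, d2, k2)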

  7. Design of a New Rotary-Cutting Small-Sized Machine for Resecting the Tail of Snails

    Institute of Scientific and Technical Information of China (English)

    史建华

    2012-01-01

    This paper presents a small snail tail-cutting machine suitable for household use, describing its key design points, working principle and composition. The machine uses a holding plate with a mortise-and-tenon (convex-concave) structure to fix the snail and, through the three steps of fixing the snail, cutting the tail and collecting the waste, removes the snail's tail by rotary cutting. The machine has the advantages of a simple structure, small size and low cost; it is an ideal processing machine for household use and can process single or multiple pieces. [Ch, 4 fig., 12 ref.]

  8. AN OPTIMIZATION-BASED HEURISTIC FOR A CAPACITATED LOT-SIZING MODEL IN AN AUTOMATED TELLER MACHINES NETWORK

    Directory of Open Access Journals (Sweden)

    Supatchaya Chotayakul

    2013-01-01

    This research studies a cash inventory problem in an ATM network to satisfy customers' cash needs over multiple periods with deterministic demand. The objective is to determine the amount of money to place in Automated Teller Machines (ATMs) and cash centers for each period over a given time horizon. The algorithms are designed as a multi-echelon inventory problem with single-item capacitated lot-sizing to minimize the total costs of running the ATM network. In this study, we formulate the problem as a Mixed Integer Program (MIP) and develop an approach based on reformulating the model as a shortest path formulation for finding a near-optimal solution of the problem. This reformulation is the same as the traditional model, except that the capacity constraints, inventory balance constraints and setup constraints related to the management of the money in ATMs are relaxed. The new formulation has more variables and constraints, but has a much tighter linear relaxation than the original and is faster to solve for short term planning. Computational results show its effectiveness, especially for large sized problems.
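
    The shortest-path view of lot sizing is easiest to see in the uncapacitated single-item case (Wagner-Whitin), sketched below; the paper's model adds capacities and the multi-echelon ATM/cash-centre structure, and the demands and costs here are made-up illustration values.

      def lot_sizing_shortest_path(demand, setup_cost, hold_cost):
          """Uncapacitated lot sizing as a shortest path: node t = start of period t with zero stock;
          arc (t, s) = one replenishment in period t covering the demand of periods t..s-1."""
          T = len(demand)
          best = [0.0] + [float("inf")] * T          # best[t] = cheapest cost to reach node t
          order_in = [0] * (T + 1)
          for t in range(T):
              for s in range(t + 1, T + 1):
                  holding = sum(hold_cost * (j - t) * demand[j] for j in range(t, s))
                  cost = best[t] + setup_cost + holding
                  if cost < best[s]:
                      best[s], order_in[s] = cost, t
          return best[T], order_in

      total_cost, order_in = lot_sizing_shortest_path([20, 50, 10, 80], setup_cost=100, hold_cost=1)
      print(total_cost)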

  9. Major evolutionary transitions of life, metabolic scaling and the number and size of mitochondria and chloroplasts.

    Science.gov (United States)

    Okie, Jordan G; Smith, Val H; Martin-Cereceda, Mercedes

    2016-05-25

    We investigate the effects of trophic lifestyle and two types of major evolutionary transitions in individuality-the endosymbiotic acquisition of organelles and development of multicellularity-on organellar and cellular metabolism and allometry. We develop a quantitative framework linking the size and metabolic scaling of eukaryotic cells to the abundance, size and metabolic scaling of mitochondria and chloroplasts and analyse a newly compiled, unprecedented database representing unicellular and multicellular cells covering diverse phyla and tissues. Irrespective of cellularity, numbers and total volumes of mitochondria scale linearly with cell volume, whereas chloroplasts scale sublinearly and sizes of both organelles remain largely invariant with cell size. Our framework allows us to estimate the metabolic scaling exponents of organelles and cells. Photoautotrophic cells and organelles exhibit photosynthetic scaling exponents always less than one, whereas chemoheterotrophic cells and organelles have steeper respiratory scaling exponents close to one. Multicellularity has no discernible effect on the metabolic scaling of organelles and cells. In contrast, trophic lifestyle has a profound and uniform effect, and our results suggest that endosymbiosis fundamentally altered the metabolic scaling of free-living bacterial ancestors of mitochondria and chloroplasts, from steep ancestral scaling to a shallower scaling in their endosymbiotic descendants.

  10. Machine vision approach to auto-generation of high resolution, continental-scale geomorphometric map from DEM

    Science.gov (United States)

    Jasiewicz, J.; Stepinski, T. F.

    2012-04-01

    A geomorphometric map (GM) is a map of landforms delineated exclusively on the basis of their morphology; it depicts a classification of landscape into its constituent elements. GM is a valuable tool for visual terrain analysis, but more importantly, it is a perfect terrain representation for further algorithmic analysis. GMs themselves are auto-generated from DEM. We have developed a new technique for auto-generation of GMs that is based on the principle of machine vision. Such an approach approximates more closely the mapping process of a human analyst and results in an efficient generation of GMs having quality and utility superior to maps generated by a standard technique based on differential geometry. The core of the new technique is the notion of a geomorphon. A geomorphon is a relief-invariant, orientation-invariant, and size-flexible abstracted elementary unit of terrain. It is calculated from the DEM using simple ternary patterns defined on a neighborhood whose size adapts to the character of the local terrain. Geomorphons are both terrain attributes and landform types at the same time; they allow for a direct and highly efficient, single-step classification and mapping of landforms. There are 498 unique geomorphons but only a small fraction of them are found in typical natural terrain. The geomorphon-based mapping technique is implemented as a GRASS GIS extension written in ANSI C and will be available in the public domain. In order to showcase the capabilities of geomorphons we have calculated the GM for the entire conterminous United States from the 30m/pixel NED DEM. The map shows the ten most abundant landforms: flat, peak, ridge, shoulder, spur, slope, hollow, footslope, valley, and pit; a lookup table was used to assign each of the remaining 488 infrequent forms to the morphologically closest mapped form. The result is a unique, never-before-seen type of map that clearly shows multiple geomorphic features and indicates the underlying geologic processes. The auto...
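
    A highly simplified sketch of the ternary-pattern idea behind geomorphons: each of the eight neighbours of a DEM cell is encoded as lower, flat or higher than the centre. The real method uses line-of-sight zenith/nadir angles and a lookup distance that adapts to the local terrain, so the fixed one-cell neighbourhood and flat threshold here are simplifying assumptions.

      import numpy as np

      def ternary_pattern(dem, r, c, flat_threshold=1.0):
          """Encode the 8 neighbours of cell (r, c) as -1 (lower), 0 (flat) or +1 (higher)."""
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
          centre = dem[r, c]
          pattern = []
          for dr, dc in offsets:
              diff = dem[r + dr, c + dc] - centre
              pattern.append(0 if abs(diff) < flat_threshold else (1 if diff > 0 else -1))
          return tuple(pattern)              # one geomorphon-like ternary code

      dem = np.array([[10.0, 11.0, 12.0],
                      [10.0, 10.0, 12.0],
                      [ 9.0, 10.0, 11.0]])
      print(ternary_pattern(dem, 1, 1))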

  11. 3D granulometry: grain-scale shape and size distribution from point cloud dataset of river environments

    Science.gov (United States)

    Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain

    2016-04-01

    The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insights into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically-biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied on the sub-cloud. Although different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A phase of results checking is then performed to remove grains showing a best-fitting model with a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data, only limited by the number of grains in the point-cloud dataset; 3) access to the 3D morphology of grains, in turn allowing the development of new metrics characterizing the size and shape of grains. The main limit of this method is that it is only able to detect grains with a characteristic size greater than the resolution of the point cloud. This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and...

  12. Scaling relation for determining the critical threshold for continuum percolation of overlapping discs of two sizes

    Indian Academy of Sciences (India)

    Ajit C Balram; Deepak Dhar

    2010-01-01

    We study continuum percolation of overlapping circular discs of two sizes. We propose a phenomenological scaling equation for the increase in the effective size of the larger discs due to the presence of the smaller discs. The critical percolation threshold as a function of the ratio of sizes of discs, for different values of the relative areal densities of two discs, can be described in terms of a scaling function of only one variable. The recent accurate Monte Carlo estimates of critical threshold by Quintanilla and Ziff [Phys. Rev. E76, 051115 (2007)] are in very good agreement with the proposed scaling relation.

  13. Meter-scale Urban Land Cover Mapping for EPA EnviroAtlas Using Machine Learning and OBIA Remote Sensing Techniques

    Science.gov (United States)

    Pilant, A. N.; Baynes, J.; Dannenberg, M.; Riegel, J.; Rudder, C.; Endres, K.

    2013-12-01

    US EPA EnviroAtlas is an online collection of tools and resources that provides geospatial data, maps, research, and analysis on the relationships between nature, people, health, and the economy (http://www.epa.gov/research/enviroatlas/index.htm). Using EnviroAtlas, you can see and explore information related to the benefits (e.g., ecosystem services) that humans receive from nature, including clean air, clean and plentiful water, natural hazard mitigation, biodiversity conservation, food, fuel, and materials, recreational opportunities, and cultural and aesthetic value. EPA developed several urban land cover maps at very high spatial resolution (one-meter pixel size) for a portion of EnviroAtlas devoted to urban studies. This urban mapping effort supported analysis of relations among land cover, human health and demographics at the US Census Block Group level. Supervised classification of 2010 USDA NAIP (National Agricultural Imagery Program) digital aerial photos produced eight-class land cover maps for several cities, including Durham, NC, Portland, ME, Tampa, FL, New Bedford, MA, Pittsburgh, PA, Portland, OR, and Milwaukee, WI. Semi-automated feature extraction methods were used to classify the NAIP imagery: genetic algorithms/machine learning, random forest, and object-based image analysis (OBIA). In this presentation we describe the image processing and fuzzy accuracy assessment methods used, and report on some sustainability and ecosystem service metrics computed using this land cover as input (e.g., carbon sequestration from USFS iTREE model; health and demographics in relation to road buffer forest width). We also discuss the land cover classification schema (a modified Anderson Level 1 after the National Land Cover Data (NLCD)), and offer some observations on lessons learned. (Figure caption: meter-scale urban land cover in Portland, OR, overlaid on a NAIP aerial photo; streets, buildings and individual trees are identifiable.)

  14. Human-machine Scale and Comfort in Packaging Container Modeling Design

    Institute of Scientific and Technical Information of China (English)

    黎英; 王建民

    2012-01-01

    Starting from ergonomic principles, this paper analyzes the human-machine factors in packaging containers and the rules of comfortable design based on human-machine scales, and systematically examines the types of comfort from a health and medical point of view. It is found that, in order for the container to provide physical and mental comfort during use and thereby create an ideal lifestyle for consumers, packaging design must take into account scales covering human body dimensions, physiological needs, psychological needs and usage behaviour.

  15. Verification of Gyrokinetic Particle Simulation of Device Size Scaling of Turbulent Transport

    Institute of Scientific and Technical Information of China (English)

    LIN Zhihong; S. ETHIER; T. S. HAHM; W. M. TANG

    2012-01-01

    Verification and historical perspective are presented on the gyrokinetic particle simulations that discovered the device size scaling of turbulent transport and identified the geometry model as the source of the long-standing disagreement between gyrokinetic particle and continuum simulations.

  16. Prediction errors in learning drug response from gene expression data - influence of labeling, sample size, and machine learning algorithm.

    Science.gov (United States)

    Bayer, Immanuel; Groth, Philip; Schneckener, Sebastian

    2013-01-01

    Model-based prediction is dependent on many choices, ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collection, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e. response could be predicted for some but not for all. The choice of sample collection plays a major role in lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment.

  17. A Size-Distance Scaling Demonstration Based on the Holway-Boring Experiment

    Science.gov (United States)

    Gallagher, Shawn P.; Hoefling, Crystal L.

    2013-01-01

    We explored size-distance scaling with a demonstration based on the classic Holway-Boring experiment. Undergraduate psychology majors estimated the sizes of two glowing paper circles under two conditions. In the first condition, the environment was dark and, with no depth cues available, participants ranked the circles according to their angular…

  19. Finite-size scaling of interface free energies in the 3d Ising model

    CERN Document Server

    Pepé, M; Forcrand, Ph. de

    2002-01-01

    We perform a study of the universality of the finite size scaling functions of interface free energies in the 3d Ising model. Close to the hot/cold phase transition, we observe very good agreement with the same scaling functions of the 4d SU(2) Yang--Mills theory at the deconfinement phase transition.

  20. Finite-size scaling of interface free energies in the 3d Ising model

    OpenAIRE

    Pepe, M.; de Forcrand, Ph.

    2001-01-01

    We perform a study of the universality of the finite size scaling functions of interface free energies in the 3d Ising model. Close to the hot/cold phase transition, we observe very good agreement with the same scaling functions of the 4d SU(2) Yang--Mills theory at the deconfinement phase transition.

  1. An Integrated Knowledge Framework to Characterize and Scaffold Size and Scale Cognition (FS2C)

    Science.gov (United States)

    Magana, Alejandra J.; Brophy, Sean P.; Bryan, Lynn A.

    2012-01-01

    Size and scale cognition is a critical ability associated with reasoning with concepts in different disciplines of science, technology, engineering, and mathematics. As such, researchers and educators have identified the need for young learners and their educators to become scale-literate. Informed by developmental psychology literature and recent…

  2. Organelle Size Scaling of the Budding Yeast Vacuole by Relative Growth and Inheritance.

    Science.gov (United States)

    Chan, Yee-Hung M; Reyes, Lorena; Sohail, Saba M; Tran, Nancy K; Marshall, Wallace F

    2016-05-09

    It has long been noted that larger animals have larger organs compared to smaller animals of the same species, a phenomenon termed scaling [1]. Julian Huxley proposed an appealingly simple model of "relative growth"-in which an organ and the whole body grow with their own intrinsic rates [2]-that was invoked to explain scaling in organs from fiddler crab claws to human brains. Because organ size is regulated by complex, unpredictable pathways [3], it remains unclear whether scaling requires feedback mechanisms to regulate organ growth in response to organ or body size. The molecular pathways governing organelle biogenesis are simpler than organogenesis, and therefore organelle size scaling in the cell provides a more tractable case for testing Huxley's model. We ask the question: is it possible for organelle size scaling to arise if organelle growth is independent of organelle or cell size? Using the yeast vacuole as a model, we tested whether mutants defective in vacuole inheritance, vac8Δ and vac17Δ, tune vacuole biogenesis in response to perturbations in vacuole size. In vac8Δ/vac17Δ, vacuole scaling increases with the replicative age of the cell. Furthermore, vac8Δ/vac17Δ cells continued generating vacuole at roughly constant rates even when they had significantly larger vacuoles compared to wild-type. With support from computational modeling, these results suggest there is no feedback between vacuole biogenesis rates and vacuole or cell size. Rather, size scaling is determined by the relative growth rates of the vacuole and the cell, thus representing a cellular version of Huxley's model.
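
    A minimal numerical sketch of Huxley-style relative growth, assuming the vacuole and the cell each grow exponentially at fixed, independent rates (all rates and volumes below are invented for illustration, not measurements from the study):

      import numpy as np

      k_cell, k_vac = 0.40, 0.30          # hypothetical intrinsic growth rates (1/h)
      t = np.linspace(0.0, 6.0, 200)      # hours within a budding cycle

      cell = 40.0 * np.exp(k_cell * t)    # cell volume (fl)
      vac = 4.0 * np.exp(k_vac * t)       # vacuole volume (fl)

      # With no feedback, log(vacuole) is linear in log(cell) with slope
      # k_vac / k_cell: a fixed allometric scaling set purely by relative growth.
      slope = np.polyfit(np.log(cell), np.log(vac), 1)[0]
      print("allometric slope:", round(slope, 3))   # 0.75 for these rates
      print("vacuole/cell ratio:", round(vac[0] / cell[0], 3), "->", round(vac[-1] / cell[-1], 3))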

  3. An extensible operating system design for large-scale parallel machines.

    Energy Technology Data Exchange (ETDEWEB)

    Riesen, Rolf E.; Ferreira, Kurt Brian

    2009-04-01

    Running untrusted user-level code inside an operating system kernel has been studied in the 1990's but has not really caught on. We believe the time has come to resurrect kernel extensions for operating systems that run on highly-parallel clusters and supercomputers. The reason is that the usage model for these machines differs significantly from a desktop machine or a server. In addition, vendors are starting to add features, such as floating-point accelerators, multicore processors, and reconfigurable compute elements. An operating system for such machines must be adaptable to the requirements of specific applications and provide abstractions to access next-generation hardware features, without sacrificing performance or scalability.

  4. Size-weight illusion and anticipatory grip force scaling following unilateral cortical brain lesion.

    Science.gov (United States)

    Li, Yong; Randerath, Jennifer; Goldenberg, Georg; Hermsdörfer, Joachim

    2011-04-01

    The prediction of object weight from its size is an important prerequisite of skillful object manipulation. Grip and load forces anticipate object size during early phases of lifting an object. A mismatch between predicted and actual weight when two different sized objects have the same weight results in the size-weight illusion (SWI), the small object feeling heavier. This study explores whether lateralized brain lesions in patients with or without apraxia alter the size-weight illusion and impair anticipatory finger force scaling. Twenty patients with left brain damage (LBD, 10 with apraxia, 10 without apraxia), ten patients with right brain damage (RBD), and matched control subjects lifted two different-sized boxes in alternation. All subjects experienced a similar size-weight illusion. The anticipatory force scaling of all groups was in correspondence with the size cue: higher forces and force rates were applied to the big box and lower forces and force rates to the small box during the first lifts. Within few lifts, forces were scaled to actual object weight. Despite the lack of significant differences at group level, 5 out of 20 LBD patients showed abnormal predictive scaling of grip forces. They differed from the LBD patients with normal predictive scaling by a greater incidence of posterior occipito-parietal lesions but not by a greater incidence of apraxia. The findings do not support a more general role for the motor-dominant left hemisphere, or an influence of apraxia per se, in the scaling of finger force according to object properties. However, damage in the vicinity of the parietal-occipital junction may be critical for deriving predictions of weight from size.

  5. Large-scale fabrication of micro-lens array by novel end-fly-cutting-servo diamond machining.

    Science.gov (United States)

    Zhu, Zhiwei; To, Suet; Zhang, Shaojian

    2015-08-10

    Fast/slow tool servo (FTS/STS) diamond turning is a very promising technique for the generation of micro-lens array (MLA). However, it is still a challenge to process MLA in large scale due to certain inherent limitations of this technique. In the present study, a novel ultra-precision diamond cutting method, as the end-fly-cutting-servo (EFCS) system, is adopted and investigated for large-scale generation of MLA. After a detailed discussion of the characteristic advantages for processing MLA, the optimal toolpath generation strategy for the EFCS is developed with consideration of the geometry and installation pose of the diamond tool. A typical aspheric MLA over a large area is experimentally fabricated, and the resulting form accuracy, surface micro-topography and machining efficiency are critically investigated. The result indicates that the MLA with homogeneous quality over the whole area is obtained. Besides, high machining efficiency, extremely small volume of control points for the toolpath, and optimal usage of system dynamics of the machine tool during the whole cutting can be simultaneously achieved.

  6. In Vivo Single-Cell Fluorescence and Size Scaling of Phytoplankton Chlorophyll Content.

    Science.gov (United States)

    Álvarez, Eva; Nogueira, Enrique; López-Urrutia, Ángel

    2017-04-01

    In unicellular phytoplankton, the size scaling exponent of chlorophyll content per cell decreases with increasing light limitation. Empirical studies have explored this allometry by combining data from several species, using average values of pigment content and cell size for each species. The resulting allometry thus includes phylogenetic and size scaling effects. The possibility of measuring single-cell fluorescence with imaging-in-flow cytometry devices allows the study of the size scaling of chlorophyll content at both the inter- and intraspecific levels. In this work, the changing allometry of chlorophyll content was estimated for the first time for single phytoplankton populations by using data from a series of incubations with monocultures exposed to different light levels. Interspecifically, our experiments confirm previous modeling and experimental results of increasing size scaling exponents with increasing irradiance. A similar pattern was observed intraspecifically but with a larger variability in size scaling exponents. Our results show that size-based processes and geometrical approaches explain variations in chlorophyll content. We also show that the single-cell fluorescence measurements provided by imaging-in-flow devices can be applied to field samples to understand the changes in the size dependence of chlorophyll content in response to environmental variables affecting primary production. IMPORTANCE: The chlorophyll concentrations in phytoplankton register physiological adjustments in cellular pigmentation arising mainly from changes in light conditions. The extent of these adjustments is constrained by the size of the phytoplankton cells, even within single populations. Hence, variations in community chlorophyll derived from photoacclimation are also dependent on the phytoplankton size distribution.
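
    For illustration only, the size scaling exponent discussed above is the slope of a log-log regression of per-cell chlorophyll (or fluorescence) on cell volume; the synthetic data below stand in for single-cell measurements from an imaging-in-flow cytometer.

      import numpy as np

      rng = np.random.default_rng(2)
      volume = 10 ** rng.uniform(0, 4, size=1000)          # cell volume, um^3 (synthetic)
      true_exponent = 0.85                                 # assumed value for illustration
      chl = 0.05 * volume ** true_exponent * np.exp(rng.normal(0, 0.2, size=1000))

      # Size scaling exponent = slope of log(chlorophyll per cell) vs log(cell volume).
      slope, intercept = np.polyfit(np.log10(volume), np.log10(chl), 1)
      print("estimated size scaling exponent:", round(slope, 3))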

  7. Finite-size scaling study of the three-dimensional classical Heisenberg model

    CERN Document Server

    Holm, C; Holm, Christian; Janke, Wolfhard

    1993-01-01

    We use the single-cluster Monte Carlo update algorithm to simulate the three-dimensional classical Heisenberg model in the critical region on simple cubic lattices of size $L^3$ with $L=12, 16, 20, 24, 32, 40$, and $48$. By means of finite-size scaling analyses we compute high-precision estimates of the critical temperature and the critical exponents, using extensively histogram reweighting and optimization techniques. Measurements of the autocorrelation time show the expected reduction of critical slowing down at the phase transition. This allows simulations on significantly larger lattices than in previous studies and consequently a better control over systematic errors in finite-size scaling analyses.
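
    As a hedged illustration of one common finite-size scaling step (not the authors' analysis), pseudo-critical temperatures measured at several lattice sizes can be extrapolated with T_c(L) = T_c + a*L^(-1/nu); the numbers below are synthetic stand-ins.

      import numpy as np
      from scipy.optimize import curve_fit

      L = np.array([12, 16, 20, 24, 32, 40, 48], dtype=float)
      Tc_true, a_true, nu_true = 1.443, 0.9, 0.705          # illustrative values only
      Tc_L = Tc_true + a_true * L ** (-1.0 / nu_true)       # stand-in peak locations

      def fss(L, Tc, a, nu):
          return Tc + a * L ** (-1.0 / nu)

      (Tc_fit, a_fit, nu_fit), _ = curve_fit(fss, L, Tc_L, p0=(1.4, 1.0, 0.7))
      print("T_c =", round(Tc_fit, 4), " nu =", round(nu_fit, 4))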

  8. Finite-size scaling analysis of a nonequilibrium phase transition in the naming game model

    Science.gov (United States)

    Brigatti, E.; Hernández, A.

    2016-11-01

    We realize an extensive numerical study of the naming game model with a noise term which accounts for perturbations. This model displays a nonequilibrium phase transition between an absorbing ordered consensus state, which occurs for small noise, and a disordered phase with fragmented clusters characterized by heterogeneous memories, which emerges at strong noise levels. The nature of the phase transition is studied by means of a finite-size scaling analysis of the moments. We observe a scaling behavior typical of a discontinuous transition and we are able to estimate the thermodynamic limit. The scaling behavior of the cluster sizes also seems compatible with this kind of transition.

  9. Finite size scaling analysis of a nonequilibrium phase transition in the naming game model

    CERN Document Server

    Brigatti, E

    2016-01-01

    We realize an extensive numerical study of the Naming Game model with a noise term which accounts for perturbations. This model displays a non-equilibrium phase transition between an absorbing ordered consensus state, which occurs for small noise, and a disordered phase with fragmented clusters characterized by heterogeneous memories, which emerges at strong noise levels. The nature of the phase transition is studied by means of a finite-size scaling analysis of the moments. We observe a scaling behavior typical of a discontinuous transition and we are able to estimate the thermodynamic limit. The scaling behavior of the clusters size seems also compatible with this kind of transition.

  10. Multi-machine scaling of the main SOL parallel heat flux width in tokamak limiter plasmas

    Science.gov (United States)

    Horacek, J.; Pitts, R. A.; Adamek, J.; Arnoux, G.; Bak, J.-G.; Brezinsek, S.; Dimitrova, M.; Goldston, R. J.; Gunn, J. P.; Havlicek, J.; Hong, S.-H.; Janky, F.; LaBombard, B.; Marsen, S.; Maddaluno, G.; Nie, L.; Pericoli, V.; Popov, Tsv; Panek, R.; Rudakov, D.; Seidl, J.; Seo, D. S.; Shimada, M.; Silva, C.; Stangeby, P. C.; Viola, B.; Vondracek, P.; Wang, H.; Xu, G. S.; Xu, Y.; Contributors, JET

    2016-07-01

    As in many of today's tokamaks, plasma start-up in ITER will be performed in limiter configuration on either the inner or outer midplane first wall (FW). The massive, beryllium-armored ITER FW panels are toroidally shaped to protect against panel-to-panel misalignments, increasing the deposited power flux density compared with a purely cylindrical surface. The chosen shaping should thus be optimized for a given radial profile of parallel heat flux, q_||, in the scrape-off layer (SOL) to ensure optimal power spreading. For plasmas limited on the outer wall in tokamaks, this profile is commonly observed to decay exponentially as q_|| = q_0 exp(-r/λ_q^omp), or, for inner-wall limiter plasmas, with a double exponential decay comprising a sharp near-SOL feature and a broader main SOL width λ_q^omp. The initial choice of λ_q^omp, which is critical in ensuring that current ramp-up or ramp-down will be possible as planned in the ITER scenario design, was made on the basis of an extremely restricted L-mode divertor dataset, using infra-red thermography measurements on the outer divertor target to extrapolate to a heat flux width at the main plasma midplane. This unsatisfactory situation has now been significantly improved by a dedicated multi-machine ohmic and L-mode limiter plasma study, conducted under the auspices of the International Tokamak Physics Activity, involving 11 tokamaks covering a wide parameter range with R = 0.4-2.8 m, B_0 = 1.2-7.5 T, I_p = 9-2500 kA. Measurements of λ_q^omp in the database are made exclusively on all devices using a variety of fast reciprocating Langmuir probes entering the plasma at a variety of poloidal locations, but with the majority being on the low field side. Statistical analysis of the database reveals nine reasonable engineering and dimensionless scalings. All yield, however, similar
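
    A minimal sketch of how a main-SOL width might be extracted from a single radial profile, assuming the simple exponential form quoted above; the profile, noise level, and fit starting values are invented and do not represent the ITPA database.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)
      r = np.linspace(0.0, 0.05, 40)                  # m beyond the last closed flux surface
      q0_true, lam_true = 50e6, 0.012                 # W/m^2 and m, illustrative only
      q_par = q0_true * np.exp(-r / lam_true) * (1 + 0.05 * rng.normal(size=r.size))

      def model(r, q0, lam):
          return q0 * np.exp(-r / lam)                # q_par = q0 * exp(-r / lambda_q)

      (q0_fit, lam_fit), _ = curve_fit(model, r, q_par, p0=(1e7, 0.01))
      print("lambda_q = %.1f mm" % (lam_fit * 1e3))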

  11. Erosive Augmentation of Solid Propellant Burning Rate: Motor Size Scaling Effect

    Science.gov (United States)

    Strand, L. D.; Cohen, Norman S.

    1990-01-01

    Two different independent variable forms, a difference form and a ratio form, were investigated for correlating the normalized magnitude of the measured erosive burning rate augmentation above the threshold in terms of the amount that the driving parameter (mass flux or Reynolds number) exceeds the threshold value for erosive augmentation at the test condition. The latter was calculated from the previously determined threshold correlation. Either variable form provided a correlation for each of the two motor size data bases individually. However, the data showed a motor size effect, supporting the general observation that the magnitude of erosive burning rate augmentation is reduced for larger rocket motors. For both independent variable forms, the required motor size scaling was attained by including the motor port radius raised to a power in the independent parameter. A boundary layer theory analysis confirmed the experimental finding, but showed that the magnitude of the scale effect is itself dependent upon scale, tending to diminish with increasing motor size.

  12. Investigations of grain size dependent sediment transport phenomena on multiple scales

    Science.gov (United States)

    Thaxton, Christopher S.

    Sediment transport processes in coastal and fluvial environments resulting from disturbances such as urbanization, mining, agriculture, military operations, and climatic change have significant impact on local, regional, and global environments. Primarily, these impacts include the erosion and deposition of sediment, channel network modification, reduction in downstream water quality, and the delivery of chemical contaminants. The scale and spatial distribution of these effects are largely attributable to the size distribution of the sediment grains that become eligible for transport. An improved understanding of advective and diffusive grain-size dependent sediment transport phenomena will lead to the development of more accurate predictive models and more effective control measures. To this end, three studies were performed that investigated grain-size dependent sediment transport on three different scales. Discrete particle computer simulations of sheet flow bedload transport on the scale of 0.1-100 millimeters were performed on a heterogeneous population of grains of various grain sizes. The relative transport rates and diffusivities of grains under both oscillatory and uniform, steady flow conditions were quantified. These findings suggest that boundary layer formalisms should describe surface roughness through a representative grain size that is functionally dependent on the applied flow parameters. On the scale of 1-10 m, experiments were performed to quantify the hydrodynamics and sediment capture efficiency of various baffles installed in a sediment retention pond, a commonly used sedimentation control measure in watershed applications. Analysis indicates that an optimum sediment capture effectiveness may be achieved based on baffle permeability, pond geometry and flow rate. Finally, on the scale of 10-1,000 m, a distributed, bivariate watershed terrain evolution module was developed within GRASS GIS. Simulation results for variable grain sizes and for

  13. Size effects and internal length scales in the elasticity of random fiber networks

    Science.gov (United States)

    Picu, Catalin; Berkache, Kamel; Shahsavari, Ali; Ganghoffer, Jean-Francois

    Random fiber networks are the structural element of many biological and man-made materials, including connective tissue, various consumer products and packaging materials. In all cases of practical interest the scale at which the material is used and the scale of the fiber diameter or the mean segment length of the network are separated by several orders of magnitude. This precludes solving boundary value problems defined on the scale of the application while resolving every fiber in the system, and mandates the development of continuum equivalent models. To this end, we study the intrinsic geometric and mechanical length scales of the network and the size effect associated with them. We consider both Cauchy and micropolar continuum models and calibrate them based on the discrete network behavior. We develop a method to predict the characteristic length scales of the problem and the minimum size of a representative element of the network based on network structural parameters and on fiber properties.

  14. Extrapolating population size from the occupancy-abundance relationship and the scaling pattern of occupancy

    DEFF Research Database (Denmark)

    Hui, Cang; McGeoch, Melodie A.; Reyers, Belinda

    2009-01-01

    The estimation of species abundances at regional scales requires a cost-efficient method that can be applied to existing broadscale data. We compared the performance of eight models for estimating species abundance and community structure from presence-absence maps of the southern African avifauna. Six models were based on the intraspecific occupancy-abundance relationship (OAR); the other two on the scaling pattern of species occupancy (SPO), which quantifies the decline in species range size when measured across progressively finer scales. The performance of these models was examined using… SPO models are suitable for data at larger spatial scales because they are based on the scale dependence of species range size and incorporate environmental heterogeneity (assuming fractal habitat structure or performing a Bayesian estimate of occupancy). Therefore, SPO models are recommended for assemblage…

  15. The effects of electrode size and discharged power on micro-electro-discharge machining drilling of stainless steel

    Directory of Open Access Journals (Sweden)

    Gianluca D’Urso

    2016-05-01

    Full Text Available This article is about the measurement of actual micro-electro-discharge machining parameters and the statistical analysis of their influence on process performance. In particular, the discharged power was taken into account as a comprehensive variable able to represent the effect of peak current and voltage on the final result. Thanks to the dedicated signal acquisition system, a correlation between the discharged power and the indexes representing the process parameters was shown. Finally, linear and non-linear regression approaches were implemented in order to obtain predictive equations for the most important aspects of micro-electro-discharge machining, such as the machining time and the electrode wear.

  16. Finite-size scaling of entanglement entropy in one-dimensional topological models

    Science.gov (United States)

    Wang, Yuting; Gulden, Tobias; Kamenev, Alex

    2017-02-01

    We consider scaling of the entanglement entropy across a topological quantum phase transition for the Kitaev chain model. The change of the topology manifests itself in a subleading term, which scales as L^(-1/α) with the size of the subsystem L, where α is the Rényi index. This term reveals the scaling function h_α(L/ξ), where ξ is the correlation length, which is sensitive to the topological index. The scaling function h_α(L/ξ) is independent of model parameters, suggesting some degree of its universality.

  17. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik

    2017-05-31

    The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy, as measured by existing and new metrics that were themselves developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.
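
    The machine-learnt blending idea can be illustrated, under strong simplifying assumptions, by learning least-squares weights that combine several member forecasts into one; the irradiance series and member error levels below are synthetic, not Watt-sun data.

      import numpy as np

      rng = np.random.default_rng(4)
      truth = rng.uniform(0, 1000, size=200)                       # observed irradiance, W/m^2
      members = np.column_stack([truth + rng.normal(0, s, 200) for s in (60, 90, 150)])

      w, *_ = np.linalg.lstsq(members, truth, rcond=None)          # blend weights from history
      blended = members @ w

      rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
      print("member RMSEs:", [round(rmse(members[:, i], truth), 1) for i in range(3)])
      print("blended RMSE:", round(rmse(blended, truth), 1))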

  18. The rank-size scaling law and entropy-maximizing principle

    Science.gov (United States)

    Chen, Yanguang

    2012-02-01

    The rank-size regularity known as Zipf's law is one of the scaling laws and is frequently observed in the natural living world and social institutions. Many scientists have tried to derive the rank-size scaling relation through entropy-maximizing methods, but they have not been entirely successful. By introducing a pivotal constraint condition, I present here a set of new derivations based on the self-similar hierarchy of cities. First, I derive a pair of exponent laws by postulating local entropy maximizing. From the two exponential laws follows a general hierarchical scaling law, which implies the general form of Zipf's law. Second, I derive a special hierarchical scaling law with the exponent equal to 1 by postulating global entropy maximizing, and this implies the pure form of Zipf's law. The rank-size scaling law has proven to be one of the special cases of the hierarchical scaling law, and the derivation suggests a certain scaling range with the first or the last data point as an outlier. The entropy maximization of social systems differs from the notion of entropy increase in thermodynamics. For urban systems, entropy maximizing suggests the greatest equilibrium between equity for parts/individuals and efficiency of the whole.
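
    As a worked illustration (synthetic data, not the paper's derivation), the rank-size rule P(k) = P1 * k^(-q) gives a straight line in log-log coordinates, and the Zipf exponent q is the negative slope:

      import numpy as np

      rng = np.random.default_rng(5)
      rank = np.arange(1, 201)
      q_true = 1.0                                     # pure form of Zipf's law
      size = 1e7 * rank ** (-q_true) * np.exp(rng.normal(0, 0.1, rank.size))

      q_est = -np.polyfit(np.log(rank), np.log(size), 1)[0]
      print("estimated Zipf exponent:", round(q_est, 3))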

  19. Law machines: scale models, forensic materiality and the making of modern patent law.

    Science.gov (United States)

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  20. Coupling machine learning with mechanistic models to study runoff production and river flow at the hillslope scale

    Science.gov (United States)

    Marçais, J.; Gupta, H. V.; De Dreuzy, J. R.; Troch, P. A. A.

    2016-12-01

    Geomorphological structure and geological heterogeneity of hillslopes are major controls on runoff responses. The diversity of hillslopes (morphological shapes and geological structures) on one hand, and the highly non-linear runoff response on the other, make it difficult to transpose what has been learnt at one specific hillslope to another. Therefore, making reliable predictions of runoff appearance or river flow for a given hillslope is a challenge. Classic model calibration (based on inverse-problem techniques) must be repeated for each specific hillslope and requires calibration data, so it is not always practical when applied to thousands of cases. Here we propose a novel modeling framework that couples process-based models with a data-based approach. First, we develop a mechanistic model, based on hillslope-storage Boussinesq equations (Troch et al. 2003), able to model non-linear runoff responses to rainfall at the hillslope scale. Second, we set up a model database representing thousands of non-calibrated simulations. These simulations investigate different hillslope shapes (real ones obtained by analyzing a 5 m digital elevation model of Brittany, and synthetic ones), different hillslope geological structures (i.e. different parametrizations), and different hydrologic forcing terms (i.e. different infiltration chronicles). Then, we use this model library to train a machine learning model on this physically based database. Machine learning model performance is then assessed by a classic validation phase (testing it on new hillslopes and comparing machine learning with mechanistic outputs). Finally, we use this machine learning model to learn which hillslope properties control runoff. This methodology will be further tested by combining synthetic datasets with real ones.
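
    A hedged sketch of the emulation step described above: a machine-learning model is trained on a library of (uncalibrated) mechanistic simulations and then queried for which inputs control the response. Every variable name and the stand-in "mechanistic" formula below are hypothetical, not the hillslope-storage Boussinesq model.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(6)
      n = 2000
      slope = rng.uniform(0.01, 0.5, n)               # hillslope gradient
      shape = rng.uniform(-2.0, 2.0, n)               # convergent vs divergent form parameter
      conductivity = 10 ** rng.uniform(-6, -3, n)     # m/s
      rain = rng.uniform(0, 50, n)                    # storm depth, mm

      # Placeholder for the output of a mechanistic simulation (peak runoff):
      peak_runoff = rain * slope / (1 + 1e4 * conductivity) + 0.1 * shape * rain

      X = np.column_stack([slope, shape, conductivity, rain])
      emulator = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, peak_runoff)
      print("importances (slope, shape, K, rain):", emulator.feature_importances_.round(2))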

  1. Eggs as energy: revisiting the scaling of egg size and energetic content among echinoderms.

    Science.gov (United States)

    Moran, A L; McAlister, J S; Whitehill, E A G

    2013-08-01

    Marine organisms exhibit substantial life-history diversity, of which egg size is one fundamental parameter. The size of an egg is generally assumed to reflect the amount of energy it contains and the amount of per-offspring maternal investment. Egg size and energy are thought to scale isometrically. We investigated this relationship by updating published datasets for echinoderms, increasing the number of species over those in previous studies by 62%. When we plotted egg energy versus egg size in the updated dataset we found that planktotrophs have a scaling factor significantly lower than 1, demonstrating an overall trend toward lower energy density in larger planktotrophic eggs. By looking within three genera, Echinometra, Strongylocentrotus, and Arbacia, we also found that the scaling exponent differed among taxa, and that in Echinometra, energy density was significantly lower in species with larger eggs. Theoretical models generally assume a strong tradeoff between egg size and fecundity that limits energetic investment and constrains life-history evolution. These data suggest that the evolution of egg size and egg energy content can be decoupled, possibly facilitating response to selective factors such as sperm limitation which could act on volume alone.

  2. Scale-dependent feedbacks between patch size and plant reproduction in desert grassland

    Science.gov (United States)

    Svejcar, Lauren N.; Bestelmeyer, Brandon T.; Duniway, Michael C.; James, Darren K.

    2015-01-01

    Theoretical models suggest that scale-dependent feedbacks between plant reproductive success and plant patch size govern transitions from highly to sparsely vegetated states in drylands, yet there is scant empirical evidence for these mechanisms. Scale-dependent feedback models suggest that an optimal patch size exists for growth and reproduction of plants and that a threshold patch organization exists below which positive feedbacks between vegetation and resources can break down, leading to critical transitions. We examined the relationship between patch size and plant reproduction using an experiment in a Chihuahuan Desert grassland. We tested the hypothesis that reproductive effort and success of a dominant grass (Bouteloua eriopoda) would vary predictably with patch size. We found that focal plants in medium-sized patches featured higher rates of grass reproductive success than when plants occupied either large patch interiors or small patches. These patterns support the existence of scale-dependent feedbacks in Chihuahuan Desert grasslands and indicate an optimal patch size for reproductive effort and success in B. eriopoda. We discuss the implications of these results for detecting ecological thresholds in desert grasslands.

  3. Development and Validation of the Body Size Scale for Assessing Body Weight Perception in African Populations

    OpenAIRE

    Cohen, Emmanuel; Bernard, Jonathan Y.; Ponty, Amandine; Ndao, Amadou; Amougou, Norbert; Saïd-Mohamed, Rihlat; Pasquet, Patrick

    2015-01-01

    Background The social valorisation of overweight in African populations could promote high-risk eating behaviours and therefore become a risk factor of obesity. However, existing scales to assess body image are usually not accurate enough to allow comparative studies of body weight perception in different African populations. This study aimed to develop and validate the Body Size Scale (BSS) to estimate African body weight perception. Methods Anthropometric measures of 80 Cameroonians and 81 ...

  4. The Rank-Size Scaling Law and Entropy-Maximizing Principle

    CERN Document Server

    Chen, Yanguang

    2011-01-01

    The rank-size regularity known as Zipf's law is one of scaling laws and frequently observed within the natural living world and in social institutions. Many scientists tried to derive the rank-size scaling relation by entropy-maximizing methods, but the problem failed to be resolved thoroughly. By introducing a pivotal constraint condition, I present here a set of new derivations based on the self-similar hierarchy of cities. First, I derive a pair of exponent laws by postulating local entropy maximizing. From the two exponential laws follows a general hierarchical scaling law, which implies general Zipf's law. Second, I derive a special hierarchical scaling law with exponent equal to 1 by postulating global entropy maximizing, and this implies the strong form of Zipf's law. The rank-size scaling law proved to be one of the special cases of the hierarchical law, and the derivation suggests a certain scaling range with the first or last data point as an outlier. The entropy maximization of social systems diffe...

  5. Research and development on cutting scale machine in the coalmine shaft

    Institute of Scientific and Technical Information of China (English)

    REN Bao-cai(任保才)

    2004-01-01

    Scale deposits in coal mine shafts often cause serious accidents, such as broken hoist ropes and seized or dropped cages. To solve this problem, the cutting-scale mechanism was studied and a new type of scale-removal equipment was built using imported hard alloy material. Cutting experiments and actual use show that it can adapt to the harsh conditions in the shaft, such as narrow space, wetness, and excessive shaft crevice water, works safely and reliably, and removes scale efficiently. It can also cut away the deposited scale around the circular cross-section of the shaft.

  6. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. And then the solution of the large-scale JSP can be obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on critical path is proposed. The unscheduled operations can thereby be decomposed into bottleneck operations and non-bottleneck operations. According to the principle of “Bottleneck leads the performance of the whole manufacturing system” in TOC (Theory of Constraints), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules for the improvement of the solving efficiency. Findings: In the process of constructing the sub-problems, some operations from the previously scheduled sub-problem are assigned to the successive sub-problem for re-optimization. This strategy can improve the solution quality of the algorithm. In the process of solving the sub-problems, the strategy of evaluating the chromosome's fitness by predicting the global scheduling objective value can improve the solution quality. Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem. They are as follows: The processing route of each job is predetermined, and the processing time of each operation is fixed. There is no machine breakdown, and no preemption of the operations is allowed. The assumptions should be considered if the algorithm is used in the actual job shop. Originality/value: The research provides an efficient scheduling method for the

  7. FRACTAL SCALING OF PARTICLE AND PORE SIZE DISTRIBUTIONS AND ITS RELATION TO SOIL HYDRAULIC CONDUCTIVITY

    Directory of Open Access Journals (Sweden)

    BACCHI O.O.S.

    1996-01-01

    Full Text Available Fractal scaling has been applied to soils, both for void and solid phases, as an approach to characterize the porous arrangement, attempting to relate particle-size distribution to soil water retention and soil water dynamic properties. One important point of such an analysis is the assumption that the void space geometry of soils reflects its solid phase geometry, taking into account that soil pores are lined by the full range of particles, and that their fractal dimension, which expresses their tortuosity, could be evaluated by the fractal scaling of particle-size distribution. Other authors already concluded that although fractal scaling plays an important role in soil water retention and porosity, particle-size distribution alone is not sufficient to evaluate the fractal structure of porosity. It is also recommended to examine the relationship between fractal properties of solids and of voids, and in some special cases, look for an equivalence of both fractal dimensions. In the present paper data of 42 soil samples were analyzed in order to compare fractal dimensions of pore-size distribution, evaluated by soil water retention curves (SWRC of soils, with fractal dimensions of soil particle-size distributions (PSD, taking the hydraulic conductivity as a standard variable for the comparison, due to its relation to tortuosity. A new procedure is proposed to evaluate the fractal dimension of pore-size distribution. Results indicate a better correlation between fractal dimensions of pore-size distribution and the hydraulic conductivity for this set of soils, showing that for most of the soils analyzed there is no equivalence of both fractal dimensions. For most of these soils the fractal dimension of particle-size distribution does not indicate properly the pore trace tortuosity. A better equivalence of both fractal dimensions was found for sandy soils.

  8. Studying time of flight imaging through scattering media across multiple size scales (Conference Presentation)

    Science.gov (United States)

    Velten, Andreas

    2017-05-01

    Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales when imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We attempt a study of scattering and methods of imaging through scattering across different scales and media, particularly with respect to the use of time of flight information. We can show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.

  9. The characteristic scale as a consistent indicator of lung nodule size in CT imaging

    OpenAIRE

    Diciotti, Stefano; Lombardo, Simone; Coppini, Giuseppe; Grassi, L.; Petrolo, L; G.Picozzi; Falchini, Massimo; Mascalchi, Mario

    2008-01-01

    Nodule growth as observed in CT scans is the primary malignancy clue of indeterminate small lung nodules. A new approach to assess the 3D size of lung nodules, based on LoG scale-space theory, is described. Validation using private (ITALUNG) and public (LIDC) data-sets is described.
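
    A small sketch of the scale-space idea behind a "characteristic scale" (illustrative only, not the ITALUNG/LIDC pipeline): the sigma at which the scale-normalized Laplacian-of-Gaussian response at a blob centre is extremal tracks the blob radius (radius = sigma * sqrt(2) for an ideal 2-D disc).

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      size, radius = 64, 6.0                           # synthetic 2-D "nodule"
      y, x = np.mgrid[:size, :size]
      img = ((x - 32) ** 2 + (y - 32) ** 2 <= radius ** 2).astype(float)

      sigmas = np.linspace(1.0, 12.0, 45)
      # Scale-normalized LoG response (factor sigma^2) sampled at the blob centre.
      resp = [abs(s ** 2 * gaussian_laplace(img, s)[32, 32]) for s in sigmas]
      s_star = sigmas[int(np.argmax(resp))]
      print("characteristic scale: %.2f px, implied radius: %.2f px" % (s_star, s_star * np.sqrt(2)))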

  10. Finite size scaling analysis of intermittency moments in the two dimensional Ising model

    CERN Document Server

    Burda, Z; Peschanski, R; Wosiek, J

    1993-01-01

    Finite size scaling is shown to work very well for the block variables used in intermittency studies on a 2-d Ising lattice. The intermittency exponents so derived exhibit the expected relations to the magnetic critical exponent of the model.

  11. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet, gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine application. Thus, size reduction techniques are needed for viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that 25% size reduction of a 10MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  12. Atomic-Scale Modeling of Particle Size Effects for the Oxygen Reduction Reaction of Pt

    DEFF Research Database (Denmark)

    Tritsaris, Georgios; Greeley, Jeffrey Philip; Rossmeisl, Jan;

    2011-01-01

    in both the specific and mass activities for particle sizes in the range between 2 and 30 nm. The mass activity is calculated to be maximized for particles of a diameter between 2 and 4 nm. Our study demonstrates how an atomic-scale description of the surface microstructure is a key component...

  13. Additive scales in degenerative disease - calculation of effect sizes and clinical judgment

    Directory of Open Access Journals (Sweden)

    Riepe Matthias W

    2011-12-01

    Full Text Available Abstract. Background: The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast over the clinical relevance of statistically significant findings relying on p values, with their potential to report chance findings. Hence, to overcome this, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding on efficacy. Methods: We simulate therapeutic effects as measured with additive scales in patient cohorts with different disease severity and assess the limitations of effect size calculations for additive scales, which are proven mathematically. Results: We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that with progressive diseases cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Conclusions: Statistical analysis needs to be guided by the boundaries of the biological condition. Alternatively, we suggest a different approach that avoids the error imposed by over-analysis of cumulative global scores from additive scales.

  14. Scaling of traction forces with the size of cohesive cell colonies.

    Science.gov (United States)

    Mertz, Aaron F; Banerjee, Shiladitya; Che, Yonglu; German, Guy K; Xu, Ye; Hyland, Callen; Marchetti, M Cristina; Horsley, Valerie; Dufresne, Eric R

    2012-05-11

    To understand how the mechanical properties of tissues emerge from interactions of multiple cells, we measure traction stresses of cohesive colonies of 1-27 cells adherent to soft substrates. We find that traction stresses are generally localized at the periphery of the colony and the total traction force scales with the colony radius. For large colony sizes, the scaling appears to approach linear, suggesting the emergence of an apparent surface tension of the order of 10^-3 N/m. A simple model of the cell colony as a contractile elastic medium coupled to the substrate captures the spatial distribution of traction forces and the scaling of traction forces with the colony size.

  15. Grain size in lithospheric-scale shear zones: Chicken or Egg?

    Science.gov (United States)

    Thielmann, M.; Rozel, A.; Kaus, B. J. P.; Ricard, Y.

    2012-04-01

    Lithospheric-scale shear zones are commonly defined as regions of inhomogeneous and localized deformation. Strain softening has been demonstrated to be necessary for localization in those shear zones, but there is still debate about the physical cause of this softening. As natural shear zones typically have a significantly reduced grain size, it has been proposed that grain size reduction provides the necessary strain softening to localize deformation. As grain size reduces, the dominant deformation mechanism switches from dislocation to diffusion creep, thus requiring less stress to deform the rock. Until recently, the equilibrium grain size has been thought to follow a piezometric relationship, thus indicating the stress under which a shear zone deformed. More recent work (Austin and Evans (2007), Rozel et al. (2011)) suggests that the equilibrium grain size is not dependent on stress, but rather on the deformational work. Using this relationship, we use numerical models to investigate the effect of grain size evolution on lithospheric deformation. We focus on the question of whether grain size provides sufficient weakening to effectively localize deformation under lithospheric conditions, or whether its effect is rather passive and as such a marker for the deformational work done in a shear zone. We then compare the localization potential of grain size reduction to shear heating and investigate the interplay between the two weakening mechanisms.

  16. Subcascade formation and defect cluster size scaling in high-energy collision events in metals

    Science.gov (United States)

    De Backer, A.; Sand, A. E.; Nordlund, K.; Luneville, L.; Simeone, D.; Dudarev, S. L.

    2016-07-01

    It has been recently established that the size of the defects created under ion irradiation follows a scaling law (Sand A. E. et al., EPL, 103 (2013) 46003; Yi X. et al., EPL, 110 (2015) 36001). A critical constraint associated with its application to phenomena occurring over a broad range of irradiation conditions is the limitation on the energy of incident particles. Incident neutrons or ions, with energies exceeding a certain energy threshold, produce a complex hierarchy of collision subcascade events, which impedes the use of the defect cluster size scaling law derived for an individual low-energy cascade. By analyzing the statistics of subcascade sizes and energies, we show that defect clustering above threshold energies can be described by a product of two scaling laws, one for the sizes of subcascades and the other for the sizes of defect clusters formed in subcascades. The statistics of subcascade sizes exhibits a transition at a threshold energy, where the subcascade morphology changes from a single domain below the energy threshold, to several or many sub-domains above the threshold. The number of sub-domains then increases in proportion to the primary knock-on atom energy. The model has been validated against direct molecular-dynamics simulations and applied to W, Fe, Be, Zr and sixteen other metals, enabling the prediction of full statistics of defect cluster sizes with no limitation on the energy of cascade events. We find that populations of defect clusters produced by the fragmented high-energy cascades are dominated by individual Frenkel pairs and relatively small defect clusters, whereas the lower-energy non-fragmented cascades produce a greater proportion of large defect clusters.

  17. A Gaussian Belief Propagation Solver for Large Scale Support Vector Machines

    CERN Document Server

    Bickson, Danny; Dolev, Danny

    2008-01-01

    Support vector machines (SVMs) are an extremely successful type of classification and regression algorithms. Building an SVM entails solving a constrained convex quadratic programming problem, which is quadratic in the number of training samples. We introduce an efficient parallel implementation of a support vector regression solver, based on the Gaussian Belief Propagation algorithm (GaBP). In this paper, we demonstrate that methods from the complex system domain could be utilized for performing efficient distributed computation. We compare the proposed algorithm to previously proposed distributed and single-node SVM solvers. Our comparison shows that the proposed algorithm is just as accurate as these solvers, while being significantly faster, especially for large datasets. We demonstrate scalability of the proposed algorithm to up to 1,024 computing nodes and hundreds of thousands of data points using an IBM Blue Gene supercomputer. As far as we know, our work is the largest parallel implementation of bel...

  18. Scaling Chromosomes for an Evolutionary Karyotype: A Chromosomal Tradeoff between Size and Number across Woody Species.

    Science.gov (United States)

    Liang, Guolu; Chen, Hong

    2015-01-01

    This study aims to examine the expected scaling relationships between chromosome size and number across woody species and to clarify the importance of this scaling for the maintenance of chromosome diversity by analyzing it at the inter- and intra-chromosomal level. To achieve these goals, chromosome trait data were extracted for 191 woody species (including 56 evergreen species and 135 deciduous species) from the available literature. Cross-species analyses revealed a tradeoff between chromosome size and number, demonstrating that a selective mechanism acts across chromosomes among woody species. Explanations for this result are presented in both intra- and inter-chromosomal contexts: the scaling may reflect compromises among scale symmetry, mechanical requirements, and resource allocation across chromosomes. In addition, a 3/4 scaling pattern was observed between total chromosomes and m-chromosomes within the nucleus, which may imply that total chromosome number evolves from more to fewer. The primary evolutionary trend of the karyotype and the role of m-chromosomes in the process of karyotype evolution are also discussed.

  19. The spatial meaning of Pareto's scaling exponent of city-size distribution

    CERN Document Server

    Chen, Yanguang

    2013-01-01

    The scaling exponent of a hierarchy of cities used to be regarded as a fractal parameter. The Pareto exponent was treated as the fractal dimension of size distribution of cities, while the Zipf exponent was treated as the reciprocal of the fractal dimension. However, this viewpoint is not exact. In this paper, I will present a new interpretation of the scaling exponent of rank-size distributions. The ideas from fractal measure relation and the principle of dimension consistency are employed to explore the essence of Pareto's and Zipf's scaling exponents. The Pareto exponent proved to be a ratio of the fractal dimension of a network of cities to the average dimension of city population. Accordingly, the Zipf exponent is the reciprocal of this dimension ratio. On a digital map, the Pareto exponent can be defined by the scaling relation between a map scale and the corresponding number of cities based on this scale. The cities of the United States of America in 1900, 1940, 1960, and 1980 and Indian cities in 1981...

  20. Genome-scale identification of Legionella pneumophila effectors using a machine learning approach.

    Directory of Open Access Journals (Sweden)

    David Burstein

    2009-07-01

    Full Text Available A large number of highly pathogenic bacteria utilize secretion systems to translocate effector proteins into host cells. Using these effectors, the bacteria subvert host cell processes during infection. Legionella pneumophila translocates effectors via the Icm/Dot type-IV secretion system and to date, approximately 100 effectors have been identified by various experimental and computational techniques. Effector identification is a critical first step towards the understanding of the pathogenesis system in L. pneumophila as well as in other bacterial pathogens. Here, we formulate the task of effector identification as a classification problem: each L. pneumophila open reading frame (ORF was classified as either effector or not. We computationally defined a set of features that best distinguish effectors from non-effectors. These features cover a wide range of characteristics including taxonomical dispersion, regulatory data, genomic organization, similarity to eukaryotic proteomes and more. Machine learning algorithms utilizing these features were then applied to classify all the ORFs within the L. pneumophila genome. Using this approach we were able to predict and experimentally validate 40 new effectors, reaching a success rate of above 90%. Increasing the number of validated effectors to around 140, we were able to gain novel insights into their characteristics. Effectors were found to have low G+C content, supporting the hypothesis that a large number of effectors originate via horizontal gene transfer, probably from their protozoan host. In addition, effectors were found to cluster in specific genomic regions. Finally, we were able to provide a novel description of the C-terminal translocation signal required for effector translocation by the Icm/Dot secretion system. To conclude, we have discovered 40 novel L. pneumophila effectors, predicted over a hundred additional highly probable effectors, and shown the applicability of machine

  1. Flow stress and tribology size effects in scaled down cylinder compression

    Institute of Scientific and Technical Information of China (English)

    GUO Bin; GONG Feng; WANG Chun-ju; SHAN De-bin

    2009-01-01

    Microforming is an effective method to manufacture small metal parts. However, macro forming knowledge cannot be transferred to microforming directly because of size effects. Flow stress and tribology size effects were studied. Scaled-down copper T2 cylinder compression was carried out with castor oil lubrication and without lubrication. The results show that the flow stress decreases with decreasing initial specimen diameter in both lubrication conditions, and the flow stress decreases by 30 MPa as the initial specimen diameter decreases from 8 mm to 1 mm. The friction factor increases markedly with decreasing initial specimen diameter in the case of lubricating with castor oil, and the friction factor increases by 0.11 as the initial specimen diameter decreases from 8 mm to 1 mm. However, the tribology size effect is not found in the case without lubrication. The reasons for the flow stress and tribology size effects were also discussed.

  2. Settlement-Size Scaling among Prehistoric Hunter-Gatherer Settlement Systems in the New World.

    Science.gov (United States)

    Haas, W Randall; Klink, Cynthia J; Maggard, Greg J; Aldenderfer, Mark S

    2015-01-01

    Settlement size predicts extreme variation in the rates and magnitudes of many social and ecological processes in human societies. Yet, the factors that drive human settlement-size variation remain poorly understood. Size variation among economically integrated settlements tends to be heavy tailed such that the smallest settlements are extremely common and the largest settlements extremely large and rare. The upper tail of this size distribution is often formalized mathematically as a power-law function. Explanations for this scaling structure in human settlement systems tend to emphasize complex socioeconomic processes including agriculture, manufacturing, and warfare-behaviors that tend to differentially nucleate and disperse populations hierarchically among settlements. But, the degree to which heavy-tailed settlement-size variation requires such complex behaviors remains unclear. By examining the settlement patterns of eight prehistoric New World hunter-gatherer settlement systems spanning three distinct environmental contexts, this analysis explores the degree to which heavy-tailed settlement-size scaling depends on the aforementioned socioeconomic complexities. Surprisingly, the analysis finds that power-law models offer plausible and parsimonious statistical descriptions of prehistoric hunter-gatherer settlement-size variation. This finding reveals that incipient forms of hierarchical settlement structure may have preceded socioeconomic complexity in human societies and points to a need for additional research to explicate how mobile foragers came to exhibit settlement patterns that are more commonly associated with hierarchical organization. We propose that hunter-gatherer mobility with preferential attachment to previously occupied locations may account for the observed structure in site-size variation.
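    A common way to fit the power-law upper tail described above is the continuous maximum-likelihood (Hill-type) estimator popularized by Clauset and colleagues. The sketch below assumes a fixed lower cutoff x_min and uses invented settlement sizes; the study's actual estimation details (including how x_min is chosen) are not reproduced.

```python
import numpy as np

def powerlaw_alpha(sizes, x_min):
    """Continuous maximum-likelihood estimate of the power-law tail exponent
    alpha for values >= x_min, with its approximate standard error."""
    x = np.asarray(sizes, dtype=float)
    tail = x[x >= x_min]
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / x_min))
    stderr = (alpha - 1.0) / np.sqrt(n)
    return alpha, stderr

# Hypothetical settlement sizes (arbitrary area units), illustration only.
areas = np.array([12, 15, 18, 22, 25, 30, 40, 55, 70, 90, 150, 260, 400, 900, 2500])
alpha, se = powerlaw_alpha(areas, x_min=20)
print(f"alpha = {alpha:.2f} +/- {se:.2f}")
```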

  3. Settlement-Size Scaling among Prehistoric Hunter-Gatherer Settlement Systems in the New World.

    Directory of Open Access Journals (Sweden)

    W Randall Haas

    Full Text Available Settlement size predicts extreme variation in the rates and magnitudes of many social and ecological processes in human societies. Yet, the factors that drive human settlement-size variation remain poorly understood. Size variation among economically integrated settlements tends to be heavy tailed such that the smallest settlements are extremely common and the largest settlements extremely large and rare. The upper tail of this size distribution is often formalized mathematically as a power-law function. Explanations for this scaling structure in human settlement systems tend to emphasize complex socioeconomic processes including agriculture, manufacturing, and warfare-behaviors that tend to differentially nucleate and disperse populations hierarchically among settlements. But, the degree to which heavy-tailed settlement-size variation requires such complex behaviors remains unclear. By examining the settlement patterns of eight prehistoric New World hunter-gatherer settlement systems spanning three distinct environmental contexts, this analysis explores the degree to which heavy-tailed settlement-size scaling depends on the aforementioned socioeconomic complexities. Surprisingly, the analysis finds that power-law models offer plausible and parsimonious statistical descriptions of prehistoric hunter-gatherer settlement-size variation. This finding reveals that incipient forms of hierarchical settlement structure may have preceded socioeconomic complexity in human societies and points to a need for additional research to explicate how mobile foragers came to exhibit settlement patterns that are more commonly associated with hierarchical organization. We propose that hunter-gatherer mobility with preferential attachment to previously occupied locations may account for the observed structure in site-size variation.

  4. Filling schemes at submicron scale: Development of submicron sized plasmonic colour filters

    Science.gov (United States)

    Rajasekharan, Ranjith; Balaur, Eugeniu; Minovich, Alexander; Collins, Sean; James, Timothy D.; Djalalian-Assl, Amir; Ganesan, Kumaravelu; Tomljenovic-Hanic, Snjezana; Kandasamy, Sasikaran; Skafidas, Efstratios; Neshev, Dragomir N.; Mulvaney, Paul; Roberts, Ann; Prawer, Steven

    2014-09-01

    The pixel size imposes a fundamental limit on the amount of information that can be displayed or recorded on a sensor. Thus, there is strong motivation to reduce the pixel size down to the nanometre scale. Nanometre colour pixels cannot be fabricated by simply downscaling current pixels due to colour cross-talk and diffraction caused by dyes or pigments used as colour filters. Colour filters based on plasmonic effects can overcome these difficulties. Although different plasmonic colour filters have been demonstrated at the micron scale, there have been no attempts so far to reduce the filter size to the submicron scale. Here, we present for the first time a submicron plasmonic colour filter design together with a new challenge - pixel boundary errors at the submicron scale. We present simple but powerful filling schemes to produce submicron colour filters, which are free from pixel boundary errors and colour cross-talk, are polarization independent and angle insensitive, and based on LCD-compatible aluminium technology. These results lay the basis for the development of submicron pixels in displays, RGB-spatial light modulators, liquid crystal on silicon, Google glasses and pico-projectors.

  5. Finite Size Scaling of the Higgs-Yukawa Model near the Gaussian Fixed Point

    CERN Document Server

    Chu, David Y.-J.; Knippschild, Bastian; Lin, C.-J. David; Nagy, Attila

    2016-01-01

    We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us an advantage to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.

  6. Development and Validation of the Body Size Scale for Assessing Body Weight Perception in African Populations

    Science.gov (United States)

    Cohen, Emmanuel; Bernard, Jonathan Y.; Ponty, Amandine; Ndao, Amadou; Amougou, Norbert; Saïd-Mohamed, Rihlat; Pasquet, Patrick

    2015-01-01

    Background The social valorisation of overweight in African populations could promote high-risk eating behaviours and therefore become a risk factor of obesity. However, existing scales to assess body image are usually not accurate enough to allow comparative studies of body weight perception in different African populations. This study aimed to develop and validate the Body Size Scale (BSS) to estimate African body weight perception. Methods Anthropometric measures of 80 Cameroonians and 81 Senegalese were used to evaluate three criteria of adiposity: body mass index (BMI), overall percentage of fat, and endomorphy (fat component of the somatotype). To develop the BSS, the participants were photographed in full face and profile positions. Models were selected for their representativeness of the wide variability in adiposity with a progressive increase along the scale. Then, for the validation protocol, participants self-administered the BSS to assess self-perceived current body size (CBS), desired body size (DBS) and provide a “body self-satisfaction index.” This protocol included construct validity, test-retest reliability and convergent validity and was carried out with three independent samples of respectively 201, 103 and 1115 Cameroonians. Results The BSS comprises two sex-specific scales of photos of 9 models each, and ordered by increasing adiposity. Most participants were able to correctly order the BSS by increasing adiposity, using three different words to define body size. Test-retest reliability was consistent in estimating CBS, DBS and the “body self-satisfaction index.” The CBS was highly correlated to the objective BMI, and two different indexes assessed with the BSS were consistent with declarations obtained in interviews. Conclusion The BSS is the first scale with photos of real African models taken in both full face and profile and representing a wide and representative variability in adiposity. The validation protocol proved its

  7. Development and Validation of the Body Size Scale for Assessing Body Weight Perception in African Populations.

    Directory of Open Access Journals (Sweden)

    Emmanuel Cohen

    Full Text Available The social valorisation of overweight in African populations could promote high-risk eating behaviours and therefore become a risk factor of obesity. However, existing scales to assess body image are usually not accurate enough to allow comparative studies of body weight perception in different African populations. This study aimed to develop and validate the Body Size Scale (BSS) to estimate African body weight perception. Anthropometric measures of 80 Cameroonians and 81 Senegalese were used to evaluate three criteria of adiposity: body mass index (BMI), overall percentage of fat, and endomorphy (fat component of the somatotype). To develop the BSS, the participants were photographed in full face and profile positions. Models were selected for their representativeness of the wide variability in adiposity with a progressive increase along the scale. Then, for the validation protocol, participants self-administered the BSS to assess self-perceived current body size (CBS), desired body size (DBS), and provide a "body self-satisfaction index." This protocol included construct validity, test-retest reliability and convergent validity and was carried out with three independent samples of respectively 201, 103 and 1115 Cameroonians. The BSS comprises two sex-specific scales of photos of 9 models each, ordered by increasing adiposity. Most participants were able to correctly order the BSS by increasing adiposity, using three different words to define body size. Test-retest reliability was consistent in estimating CBS, DBS and the "body self-satisfaction index." The CBS was highly correlated to the objective BMI, and two different indexes assessed with the BSS were consistent with declarations obtained in interviews. The BSS is the first scale with photos of real African models taken in both full face and profile and representing a wide and representative variability in adiposity. The validation protocol proved its reliability for estimating body weight

  8. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    The gear vibration signal is nonlinear and non-stationary, and gear fault diagnosis under variable conditions has long been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on intrinsic time-scale decomposition (ITD), singular value decomposition (SVD), and support vector machine (SVM) classification is proposed in this paper. The ITD method is adopted to decompose the gearbox vibration signal into several proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, the support vector machine is applied to classify the gear fault type. According to the experimental results, ITD-SVD outperforms the time-frequency analysis methods in which EMD or WPT is combined with SVD for feature extraction, and the SVM classifier outperforms K-nearest neighbors (K-NN) and back-propagation (BP) classifiers. Moreover, the proposed approach can accurately diagnose and identify different gear fault types under variable conditions.
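    A minimal sketch of the SVD-feature plus SVM stage described above: each record is assumed to be already decomposed into proper rotation components (the ITD step is not implemented here), the singular values of the component matrix serve as the feature vector, and an RBF-kernel SVM is cross-validated. The data, component counts, and SVM settings are stand-ins, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def svd_features(components):
    """Singular values of the matrix of proper rotation components (one per row)."""
    return np.linalg.svd(components, compute_uv=False)

# Stand-in data: 60 vibration records, each already decomposed into
# 6 proper rotation components of 256 samples (the ITD step is not shown).
X, y = [], []
for label in (0, 1, 2):                      # three hypothetical gear conditions
    for _ in range(20):
        comps = rng.normal(scale=1.0 + label, size=(6, 256))
        X.append(svd_features(comps))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```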

  9. Machine learning for large-scale wearable sensor data in Parkinson's disease: Concepts, promises, pitfalls, and futures.

    Science.gov (United States)

    Kubota, Ken J; Chen, Jason A; Little, Max A

    2016-09-01

    For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, "wearable," sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that "learn" from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses implications of this technology and a practical road map for realizing the full potential of this technology in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.

  10. Temperature and zooplankton size structure: climate control and basin-scale comparison in the North Pacific.

    Science.gov (United States)

    Chiba, Sanae; Batten, Sonia D; Yoshiki, Tomoko; Sasaki, Yuka; Sasaoka, Kosei; Sugisaki, Hiroya; Ichikawa, Tadafumi

    2015-02-01

    The global distribution of zooplankton community structure is known to follow latitudinal temperature gradients: larger species in cooler, higher latitudinal regions. However, interspecific relationships between temperature and size in zooplankton communities have not been fully examined in terms of temporal variation. To re-examine the relationship on a temporal scale and the effects of climate control thereon, we investigated the variation in copepod size structure in the eastern and western subarctic North Pacific in 2000-2011. This report presents the first basin-scale comparison of zooplankton community changes in the North Pacific based on a fully standardized data set obtained from the Continuous Plankton Recorder (CPR) survey. We found an increase in copepod community size (CCS) after 2006-2007 in the both regions because of the increased dominance of large cold-water species. Sea surface temperature varied in an east-west dipole manner, showing the typical Pacific Decadal Oscillation pattern: cooling in the east and warming in the west after 2006-2007. The observed positive correlation between CCS and sea surface temperature in the western North Pacific was inconsistent with the conventional interspecific temperature-size relationship. We explained this discrepancy by the geographical shift of the upper boundary of the thermal niche, the 9°C isotherm, of large cold-water species. In the eastern North Pacific, the boundary stretched northeast, to cover a large part of the sampling area after 2006-2007. In contrast, in the western North Pacific, the isotherm location hardly changed and the sampling area remained within its thermal niche throughout the study period, despite the warming that occurred. Our study suggests that while a climate-induced basin-scale cool-warm cycle can alter copepod community size and might subsequently impact the functions of the marine ecosystem in the North Pacific, the interspecific temperature-size relationship is not

  11. Material length scales in gradient-dependent plasticity/damage and size effects: Theory and computation

    Science.gov (United States)

    Abu Al-Rub, Rashid Kamel

    Structural materials display a strong size dependence when deformed non-uniformly into the inelastic range: smaller is stronger. This effect has important implications for an increasing number of applications in structural failure, electronics, functional coatings, composites, micro-electro-mechanical systems (MEMS), nanostructured materials, micro/nanometer fabrication technologies, etc. The mechanical behavior in these applications cannot be characterized by classical (local) continuum theories because they incorporate no 'material length scales' and consequently predict no size effects. On the other hand, it is still not possible to perform quantum or atomistic simulations on realistic time scales and structure sizes. It is therefore necessary to develop a scale-dependent continuum theory bridging the gap between the classical continuum theories and atomistic simulations in order to be able to design the size-dependent structures of modern technology. Nonlocal rate-dependent and gradient-dependent theories of plasticity and damage are developed in this work for this purpose. We adopt a multi-scale, hierarchical, thermodynamically consistent framework to construct the material constitutive relations for scale-dependent plasticity/damage behavior. Material length scales are introduced into the governing equations implicitly through material rate dependency (viscosity) and explicitly through coefficients of spatial higher-order gradients of one or more material state variables. The proposed framework is implemented into the commercially well-known finite element software ABAQUS. The finite element simulations of material instability problems converge to meaningful results upon further refinement of the finite element mesh, since the width of the fracture process zone (shear band) is determined by the intrinsic material length scale; the classical continuum theories fail to address this problem. It is also shown that the proposed theory is successful for

  12. Finite-size scaling in silver nanowire films: design considerations for practical devices

    Science.gov (United States)

    Large, Matthew J.; Cann, Maria; Ogilvie, Sean P.; King, Alice A. K.; Jurewicz, Izabela; Dalton, Alan B.

    2016-07-01

    We report the first application of finite-size scaling theory to nanostructured percolating networks, using silver nanowire (AgNW) films as a model system for experiment and simulation. AgNWs have been shown to be a prime candidate for replacing Indium Tin Oxide (ITO) in applications such as capacitive touch sensing. While their performance as large area films is well-studied, the production of working devices involves patterning of the films to produce isolated electrode structures, which exhibit finite-size scaling when these features are sufficiently small. We demonstrate a generalised method for understanding this behaviour in practical rod percolation systems, such as AgNW films, and study the effect of systematic variation of the length distribution of the percolating material. We derive a design rule for the minimum viable feature size in a device pattern, relating it to parameters which can be derived from a transmittance-sheet resistance data series for the material in question. This understanding has direct implications for the industrial adoption of silver nanowire electrodes in applications where small features are required including single-layer capacitive touch sensors, LCD and OLED display panels.

  13. Jurisdiction Size and Local Democracy: Evidence on Internal Political Efficacy from Large-scale Municipal Reform

    DEFF Research Database (Denmark)

    Lassen, David Dreyer; Serritzlew, Søren

    2011-01-01

    and problems of endogeneity. We focus on internal political efficacy, a psychological condition that many see as necessary for high-quality participatory democracy. We identify a quasi-experiment, a large-scale municipal reform in Denmark, which allows us to estimate a causal effect of jurisdiction size on internal political efficacy. The reform, affecting some municipalities, but not all, was implemented by the central government, and resulted in exogenous, and substantial, changes in municipal population size. Based on survey data collected before and after the reform, we find, using various difference

  14. Optimal scaling of average queue sizes in an input-queued switch: an open problem

    OpenAIRE

    Shah, Devavrat; Tsitsiklis, John N.; Zhong, Yuan

    2011-01-01

    We review some known results and state a few versions of an open problem related to the scaling of the total queue size (in steady state) in an n×n input-queued switch, as a function of the port number n and the load factor ρ. Loosely speaking, the question is whether the total number of packets in queue, under either the maximum weight policy or under an optimal policy, scales (ignoring any logarithmic factors) as O(n/(1 − ρ)).

  15. Origin of sample size effect: Stochastic dislocation formation in crystalline metals at small scales

    Science.gov (United States)

    Huang, Guan-Rong; Huang, J. C.; Tsai, W. Y.

    2016-12-01

    In crystalline metals at small scales, the dislocation density is increased by stochastic events in the dislocation network, leading to a universal power law across various material structures. In this work, we develop a model for the probability distribution of dislocation density that describes dislocation formation in terms of a chain reaction. The leading-order terms of the steady-state probability distribution give physical and quantitative insight into the scaling exponent n in the power law of the sample-size effect. This approach is found to be consistent with experimental n values over a wide range.

  16. Turbulent Concentration of MM-Size Particles in the Protoplanetary Nebula: Scale-Dependent Multiplier Functions

    Science.gov (United States)

    Cuzzi, Jeffrey N.; Hartlep, Thomas; Weston, B.; Estremera, Shariff Kareem

    2014-01-01

    The initial accretion of primitive bodies (asteroids and TNOs) from freely-floating nebula particles remains problematic. Here we focus on the asteroids where constituent particle (read "chondrule") sizes are observationally known; similar arguments will hold for TNOs, but the constituent particles in those regions will be smaller, or will be fluffy aggregates, and are unobserved. Traditional growth-by-sticking models encounter a formidable "meter-size barrier" [1] (or even a mm-cm-size barrier [2]) in turbulent nebulae, while nonturbulent nebulae form large asteroids too quickly to explain long spreads in formation times, or the dearth of melted asteroids [3]. Even if growth by sticking could somehow breach the meter size barrier, other obstacles are encountered through the 1-10km size range [4]. Another clue regarding planetesimal formation is an apparent 100km diameter peak in the pre-depletion, pre-erosion mass distribution of asteroids [5]; scenarios leading directly from independent nebula particulates to this size, which avoid the problematic m-km size range, could be called "leapfrog" scenarios [6-8]. The leapfrog scenario we have studied in detail involves formation of dense clumps of aerodynamically selected, typically mm-size particles in turbulence, which can under certain conditions shrink inexorably on 100-1000 orbit timescales and form 10-100km diameter sandpile planetesimals. The typical sizes of planetesimals and the rate of their formation [7,8] are determined by a statistical model with properties inferred from large numerical simulations of turbulence [9]. Nebula turbulence can be described by its Reynolds number Re = (L/η)^(4/3), where L = H α^(1/2) is the largest eddy scale, H is the nebula gas vertical scale height, α the nebula turbulent viscosity parameter, and η is the Kolmogorov or smallest scale in turbulence (typically about 1 km), with eddy turnover time t_η. In the nebula, Re is far larger than any numerical simulation can

  17. Turbulent Concentration of mm-Size Particles in the Protoplanetary Nebula: Scale-Dependent Cascades

    Science.gov (United States)

    Cuzzi, J. N.; Hartlep, T.

    2015-01-01

    The initial accretion of primitive bodies (here, asteroids in particular) from freely-floating nebula particles remains problematic. Traditional growth-by-sticking models encounter a formidable "meter-size barrier" (or even a mm-to-cm-size barrier) in turbulent nebulae, making the preconditions for so-called "streaming instabilities" difficult to achieve even for so-called "lucky" particles. Even if growth by sticking could somehow breach the meter size barrier, turbulent nebulae present further obstacles through the 1-10km size range. On the other hand, nonturbulent nebulae form large asteroids too quickly to explain long spreads in formation times, or the dearth of melted asteroids. Theoretical understanding of nebula turbulence is itself in flux; recent models of MRI (magnetically-driven) turbulence favor low-or- no-turbulence environments, but purely hydrodynamic turbulence is making a comeback, with two recently discovered mechanisms generating robust turbulence which do not rely on magnetic fields at all. An important clue regarding planetesimal formation is an apparent 100km diameter peak in the pre-depletion, pre-erosion mass distribution of asteroids; scenarios leading directly from independent nebula particulates to large objects of this size, which avoid the problematic m-km size range, could be called "leapfrog" scenarios. The leapfrog scenario we have studied in detail involves formation of dense clumps of aerodynamically selected, typically mm-size particles in turbulence, which can under certain conditions shrink inexorably on 100-1000 orbit timescales and form 10-100km diameter sandpile planetesimals. There is evidence that at least the ordinary chondrite parent bodies were initially composed entirely of a homogeneous mix of such particles. Thus, while they are arcane, turbulent concentration models acting directly on chondrule size particles are worthy of deeper study. The typical sizes of planetesimals and the rate of their formation can be

  18. Allometric scaling in the dentition of primates and prediction of body weight from tooth size in fossils.

    Science.gov (United States)

    Gingerich, P D; Smith, B H; Rosenberg, K

    1982-05-01

    Tooth size varies exponentially with body weight in primates. Logarithmic transformation of tooth crown area and body weight yields a linear model of slope 0.67 as an isometric (geometric) baseline for the study of dental allometry. This model is compared with that predicted by metabolic scaling (slope = 0.75). Tarsius and other insectivores have larger teeth for their body size than generalized primates do, and they are not included in this analysis. Among generalized primates, tooth size is highly correlated with body size. Correlations of upper and lower cheek teeth with body size range from 0.90-0.97, depending on tooth position. Central cheek teeth (upper and lower P4 and M1) have allometric coefficients ranging from 0.57-0.65, falling well below geometric scaling. Anterior and posterior cheek teeth scale at or above metabolic scaling. Considered individually or as a group, upper cheek teeth scale allometrically with lower coefficients than corresponding lower cheek teeth; the reverse is true for incisors. The sum of crown areas for all upper cheek teeth scales significantly below geometric scaling, while the sum of crown areas for all lower cheek teeth approximates geometric scaling. Tooth size can be used to predict the body weight of generalized fossil primates. This is illustrated for Aegyptopithecus and other Eocene, Oligocene, and Miocene primates. Regressions based on tooth size in generalized primates yield reasonable estimates of body weight, but much remains to be learned about tooth size and body size scaling in more restricted systematic groups and dietary guilds.
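    The allometric fit described above reduces to a straight line in log-log space. The sketch below fits log crown area against log body weight for invented (crown area, body weight) pairs and then inverts the fitted line to predict a fossil's body weight from its tooth size; the numbers are illustrative only, and in practice one would fit the regression in the direction appropriate for prediction and use real comparative data.

```python
import numpy as np

# Hypothetical (crown area mm^2, body weight g) pairs for generalized primates.
crown_area = np.array([8, 15, 25, 40, 60, 95, 140, 220], dtype=float)
body_wt    = np.array([90, 300, 800, 2000, 4500, 9000, 20000, 45000], dtype=float)

# Fit log10(area) = a * log10(weight) + b; a is the allometric coefficient.
a, b = np.polyfit(np.log10(body_wt), np.log10(crown_area), 1)
print(f"allometric coefficient a = {a:.2f} (0.67 = geometric, 0.75 = metabolic)")

# Invert the fitted line to predict body weight of a fossil from its tooth size.
fossil_area = 50.0
log_wt = (np.log10(fossil_area) - b) / a
print(f"predicted body weight ~ {10**log_wt:.0f} g")
```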

  19. Steady-state numerical modeling of size effects in micron scale wire drawing

    DEFF Research Database (Denmark)

    Juul, Kristian Jørgensen; Nielsen, Kim Lau; Niordson, Christian Frithiof

    2017-01-01

    Wire drawing processes at the micron scale have received increased interest as micro wires are increasingly required in electrical components. It is well-established that size effects due to large strain gradient effects play an important role at this scale, and the present study aims to quantify these effects for the wire drawing process. Focus will be on investigating the impact of size effects on the most favourable tool geometry (in terms of minimizing the drawing force) for various conditions at the wire/tool interface. The numerical analysis is based on a steady-state framework that enables convergence without dealing with the transient regime, but still fully accounts for the history dependence as well as the elastic unloading. Thus, it forms the basis for a comprehensive parameter study. During the deformation process in wire drawing, large plastic strain gradients evolve in the contact region...

  20. Weakest-Link Scaling and Finite Size Effects on Recurrence Times Distribution

    CERN Document Server

    Hristopulos, Dionissios T; Kaniadakis, Giorgio

    2013-01-01

    Tectonic earthquakes result from the fracturing of the Earth's crust due to the loading induced by the motion of the tectonic plates. Hence, the statistical laws of earthquakes must be intimately connected to the statistical laws of fracture. The Weibull distribution is a commonly used model of earthquake recurrence times (ERT). Nevertheless, deviations from Weibull scaling have been observed in ERT data and in fracture experiments on quasi-brittle materials. We propose that the weakest-link-scaling theory for finite-size systems leads to the kappa-Weibull function, which implies a power-law tail for the ERT distribution. We show that the ERT hazard rate function decreases linearly after a waiting time which is proportional to the system size (in terms of representative volume elements) raised to the inverse of the Weibull modulus. We also demonstrate that the kappa-Weibull can be applied to strongly correlated systems by means of simulations of a fiber bundle model.

  1. Estimation and scaling of hydrostratigraphic units: application of unsupervised machine learning and multivariate statistical techniques to hydrogeophysical data

    Science.gov (United States)

    Friedel, Michael J.

    2016-08-01

    Numerical models provide a way to evaluate groundwater systems, but determining the hydrostratigraphic units (HSUs) used in constructing these models remains subjective, nonunique, and uncertain. A three-step machine-learning approach is proposed in which fusion, estimation, and clustering operations are performed on different data sets to arrive at HSUs at different scales. In step one, data fusion is performed by training a self-organizing map (SOM) with sparse borehole hydrogeologic (lithology, hydraulic conductivity, aqueous field parameters, dissolved constituents) and geophysical (gamma, spontaneous potential, and resistivity) measurements. Estimation is handled by iterative least-squares minimization of the SOM quantization and topographical errors. Application of the Davies-Bouldin criteria to k-means clustering of SOM nodes is used to determine the number and location of discontinuous borehole HSUs with low lateral density (based on borehole spacing at 100 s m) and high vertical density (based on cm-scale logging). In step two, a scaling network is trained using the estimated borehole HSUs, airborne electromagnetic measurements, and numerically inverted resistivity profiles. In step three, independent airborne electromagnetic measurements are applied to the scaling network, and the estimation performed to arrive at a set of continuous HSUs with high lateral density (based on sounding locations at meter (m) spacing) and medium vertical density (based on m-layer modeled structure). Performance metrics are used to evaluate each step of the approach. Efficacy of the proposed approach is demonstrated to map local-to-regional scale HSUs using hydrogeophysical data collected at a heterogeneous surficial aquifer in northwestern Nebraska, USA.
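    The clustering step of the workflow described above, choosing the number of HSUs by applying the Davies-Bouldin criterion to k-means partitions, can be sketched with scikit-learn. The SOM training and estimation steps are not reproduced here; the standardized feature matrix below is a synthetic stand-in for SOM-fused borehole logs, and the column interpretations are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Stand-in for SOM-fused borehole logs: rows = depth intervals, columns standing
# in for (gamma, resistivity, hydraulic conductivity, lithology code).
X = StandardScaler().fit_transform(
    rng.normal(size=(500, 4)) + np.repeat(rng.normal(size=(5, 4)), 100, axis=0)
)

# Davies-Bouldin criterion: lower is better; pick the k with the minimum score.
scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)

best_k = min(scores, key=scores.get)
print("Davies-Bouldin scores:", {k: round(v, 2) for k, v in scores.items()})
print("number of HSUs selected:", best_k)
```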

  2. Estimation and scaling of hydrostratigraphic units: application of unsupervised machine learning and multivariate statistical techniques to hydrogeophysical data

    Science.gov (United States)

    Friedel, Michael J.

    2016-12-01

    Numerical models provide a way to evaluate groundwater systems, but determining the hydrostratigraphic units (HSUs) used in constructing these models remains subjective, nonunique, and uncertain. A three-step machine-learning approach is proposed in which fusion, estimation, and clustering operations are performed on different data sets to arrive at HSUs at different scales. In step one, data fusion is performed by training a self-organizing map (SOM) with sparse borehole hydrogeologic (lithology, hydraulic conductivity, aqueous field parameters, dissolved constituents) and geophysical (gamma, spontaneous potential, and resistivity) measurements. Estimation is handled by iterative least-squares minimization of the SOM quantization and topographical errors. Application of the Davies-Bouldin criteria to k-means clustering of SOM nodes is used to determine the number and location of discontinuous borehole HSUs with low lateral density (based on borehole spacing at 100 s m) and high vertical density (based on cm-scale logging). In step two, a scaling network is trained using the estimated borehole HSUs, airborne electromagnetic measurements, and numerically inverted resistivity profiles. In step three, independent airborne electromagnetic measurements are applied to the scaling network, and the estimation performed to arrive at a set of continuous HSUs with high lateral density (based on sounding locations at meter (m) spacing) and medium vertical density (based on m-layer modeled structure). Performance metrics are used to evaluate each step of the approach. Efficacy of the proposed approach is demonstrated to map local-to-regional scale HSUs using hydrogeophysical data collected at a heterogeneous surficial aquifer in northwestern Nebraska, USA.

  3. Human-Machine Cooperation in Large-Scale Multimedia Retrieval: A Survey

    Science.gov (United States)

    Shirahama, Kimiaki; Grzegorzek, Marcin; Indurkhya, Bipin

    2015-01-01

    "Large-Scale Multimedia Retrieval" (LSMR) is the task to fast analyze a large amount of multimedia data like images or videos and accurately find the ones relevant to a certain semantic meaning. Although LSMR has been investigated for more than two decades in the fields of multimedia processing and computer vision, a more…

  4. The Size, Scale, and Structure Concept Inventory (S3CI) for Astronomy

    Science.gov (United States)

    Gingrich, E. C.; Ladd, E. F.; Nottis, K. E. K.; Udomprasert, P.; Goodman, A. A.

    2015-11-01

    We present a concept inventory to evaluate student understanding of size, scale, and structure concepts in the astronomical context. Students harbor misconceptions regarding these concepts, and these misconceptions often persist even after instruction. Evaluation of these concepts prior to as well as after instruction can ensure misconceptions are addressed. Currently, no concept inventories focus exclusively on these geometrical ideas, so we have developed the Size, Scale and Structure Concept Inventory (S3CI). In fall 2013, we piloted a 24-item version of the S3CI in an introductory astronomy course at a small private university. We performed an item analysis and estimated the internal consistency reliability for the instrument. Based on these analyses, problematic questions were revised for a second version. We discuss the results from the pilot phase and preview our updated test in this work. A valid and reliable concept inventory has the potential to accurately evaluate undergraduates' understanding of size, scale, and structure concepts in the astronomical context, as well as assess conceptual change after targeted instruction. Lessons learned in the evaluation of the initial version of the S3CI can guide future development of this and other astronomical concept inventories. Instructors interested in participating in the ongoing development of the S3CI should contact the authors.

  5. INCREASING RETURNS TO SCALE, DYNAMICS OF INDUSTRIAL STRUCTURE AND SIZE DISTRIBUTION OF FIRMS

    Institute of Scientific and Technical Information of China (English)

    Ying FAN; Menghui LI; Zengru DI

    2006-01-01

    A multi-agent model is presented to discuss the market dynamics and the size distribution of firms. The model emphasizes the effects of increasing returns to scale and describes the birth and death of adaptive producers. The evolution of market structure and its behavior under technological shocks are investigated. The dynamical results are in good agreement with some empirical "stylized facts" of industrial evolution. With diverse demand and adaptive growth strategies of firms, firm size in the generalized model obeys a power-law distribution. Three factors mainly determine the competitive dynamics and the skewed size distributions of firms: 1. a self-reinforcing mechanism; 2. adaptive firm growth strategies; 3. demand diversity, or widespread heterogeneity in the technological capabilities of firms.

  6. Scale effects between body size and limb design in quadrupedal mammals.

    Science.gov (United States)

    Kilbourne, Brandon M; Hoffman, Louwrens C

    2013-01-01

    Recently the metabolic cost of swinging the limbs has been found to be much greater than previously thought, raising the possibility that limb rotational inertia influences the energetics of locomotion. Larger mammals have a lower mass-specific cost of transport than smaller mammals. The scaling of the mass-specific cost of transport is partly explained by decreasing stride frequency with increasing body size; however, it is unknown if limb rotational inertia also influences the mass-specific cost of transport. Limb length and inertial properties--limb mass, center of mass (COM) position, moment of inertia, radius of gyration, and natural frequency--were measured in 44 species of terrestrial mammals, spanning eight taxonomic orders. Limb length increases disproportionately with body mass via positive allometry (length ∝ body mass^0.40); the positive allometry of limb length may help explain the scaling of the metabolic cost of transport. When scaled against body mass, forelimb inertial properties, apart from mass, scale with positive allometry. Fore- and hindlimb mass scale according to geometric similarity (limb mass ∝ body mass^1.0), as do the remaining hindlimb inertial properties. The positive allometry of limb length is largely the result of absolute differences in limb inertial properties between mammalian subgroups. Though likely detrimental to locomotor costs in large mammals, scale effects in limb inertial properties appear to be concomitant with scale effects in sensorimotor control and locomotor ability in terrestrial mammals. Across mammals, the forelimb's potential for angular acceleration scales according to geometric similarity, whereas the hindlimb's potential for angular acceleration scales with positive allometry.

  7. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the data set is large. Big data has recently gained momentum in both industry and academia. To fulfil the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated on an experimental MapReduce computer cluster in terms of classification accuracy and computational efficiency.
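    The split-train-combine pattern behind a MapReduce parallelization can be sketched in a few lines without an actual Hadoop cluster: each "map" task trains on one shard of the data and the "reduce" step averages the learned parameters. The logistic-regression model, shard count, and data below are stand-ins, not the paper's network or implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_shard(X, y, w, epochs=50, lr=0.1):
    """'Map' step: a few epochs of logistic-regression gradient descent on one shard."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Toy data split into 4 shards, standing in for a MapReduce input split.
X = rng.normal(size=(4000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)
shards = zip(np.array_split(X, 4), np.array_split(y, 4))

# Map: train one model per shard; Reduce: average the learned weights.
local_weights = [train_shard(Xs, ys, np.zeros(5)) for Xs, ys in shards]
w_avg = np.mean(local_weights, axis=0)

accuracy = np.mean(((X @ w_avg) > 0) == y)
print(f"accuracy of the averaged model: {accuracy:.3f}")
```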

  8. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    OpenAIRE

    Yang Liu; Jie Yang; Yuan Huang; Lixiong Xu; Siguang Li; Man Qi

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing mo...

  9. Finite-size scaling in a 2D disordered electron gas with spectral nodes.

    Science.gov (United States)

    Sinner, Andreas; Ziegler, Klaus

    2016-08-03

    We study the DC conductivity of a weakly disordered 2D electron gas with two bands and spectral nodes, employing the field theoretical version of the Kubo-Greenwood conductivity formula. Disorder scattering is treated within the standard perturbation theory by summing up ladder and maximally crossed diagrams. The emergent gapless (diffusion) modes determine the behavior of the conductivity on large scales. We find a finite conductivity with an intermediate logarithmic finite-size scaling towards smaller conductivities but do not obtain the logarithmic divergence of the weak-localization approach. Our results agree with the experimentally observed logarithmic scaling of the conductivity in graphene with the formation of a plateau near e^2/(πh).

  10. Dependence of exponents on text length versus finite-size scaling for word-frequency distributions

    Science.gov (United States)

    Corral, Álvaro; Font-Clos, Francesc

    2017-08-01

    Some authors have recently argued that a finite-size scaling law for the text-length dependence of word-frequency distributions cannot be conceptually valid. Here we give solid quantitative evidence for the validity of this scaling law, using both careful statistical tests and analytical arguments based on the generalized central-limit theorem applied to the moments of the distribution (and obtaining a novel derivation of Heaps' law as a by-product). We also find that the picture of word-frequency distributions with power-law exponents that decrease with text length [X. Yan and P. Minnhagen, Physica A 444, 828 (2016), 10.1016/j.physa.2015.10.082] does not stand with rigorous statistical analysis. Instead, we show that the distributions are perfectly described by power-law tails with stable exponents, whose values are close to 2, in agreement with the classical Zipf's law. Some misconceptions about scaling are also clarified.

  11. Yield scaling, size hierarchy and fluctuations of observables in fragmentation of excited heavy nuclei

    CERN Document Server

    Neindre, N Le; Wieleczko, J P; Borderie, B; Gulminelli, F; Rivet, M F; Bougault, R; Chbihi, A; Dayras, R; Frankland, J D; Galíchet, E; Guinet, D; Lautesse, P; López, O; Lukasik, J; Mercier, D; Moisan, J; Pârlog, M; Rosato, E; Roy, R; Schwarz, C; Sfienti, C; Tamain, B; Trautmann, W; Trzcinski, A; Turzó, K; Vient, E; Vigilante, M; Zwieglinski, B

    2007-01-01

    Multifragmentation properties measured with INDRA are studied for single sources produced in Xe+Sn reactions in the incident energy range 32-50 A MeV and quasiprojectiles from Au+Au collisions at 80 A MeV. A comparison for both types of sources is presented concerning Fisher scaling, Zipf law, fragment size and fluctuation observables. A Fisher scaling is observed for all the data. The pseudo-critical energies extracted from the Fisher scaling are consistent between Xe+Sn central collisions and Au quasi-projectiles. In the latter case it also corresponds to the energy region at which fluctuations are maximal. The critical energies deduced from the Zipf analysis are higher than those from the Fisher analysis.

  12. Finite-size scaling in a 2D disordered electron gas with spectral nodes

    Science.gov (United States)

    Sinner, Andreas; Ziegler, Klaus

    2016-08-01

    We study the DC conductivity of a weakly disordered 2D electron gas with two bands and spectral nodes, employing the field theoretical version of the Kubo-Greenwood conductivity formula. Disorder scattering is treated within the standard perturbation theory by summing up ladder and maximally crossed diagrams. The emergent gapless (diffusion) modes determine the behavior of the conductivity on large scales. We find a finite conductivity with an intermediate logarithmic finite-size scaling towards smaller conductivities but do not obtain the logarithmic divergence of the weak-localization approach. Our results agree with the experimentally observed logarithmic scaling of the conductivity in graphene with the formation of a plateau near e^2/(πh).

  13. Economies of scale and optimal size of hospitals: Empirical results for Danish public hospitals

    DEFF Research Database (Denmark)

    Kristensen, Troels

    Context and aim: The Danish hospital sector is facing a significant rebuilding programme, driven by a political desire to concentrate activity in fewer and larger hospitals. Our aim is to analyse whether the current configuration of Danish hospitals is subject to scale economies that may justify such plans and to estimate an optimal hospital size. Methods: We estimate cost functions using panel data on total costs, DRG-weighted casemix, and number of beds for three years from 2004-2006. A short-run cost function is used to derive estimates of long-run scale economies by applying the envelope condition. Results: We identify moderate to significant long-run economies of scale when applying two alternative

  14. A generic trust framework for large-scale open systems using machine learning

    OpenAIRE

    Liu, Xin; Tredan, Gilles; Datta, Anwitaman

    2011-01-01

    In many large scale distributed systems and on the web, agents need to interact with other unknown agents to carry out some tasks or transactions. The ability to reason about and assess the potential risks in carrying out such transactions is essential for providing a safe and reliable environment. A traditional approach to reason about the trustworthiness of a transaction is to determine the trustworthiness of the specific agent involved, derived from the history of its behavior. As a depart...

  15. Trends in size of tropical deforestation events signal increasing dominance of industrial-scale drivers

    Science.gov (United States)

    Austin, Kemen G.; González-Roglich, Mariano; Schaffer-Smith, Danica; Schwantes, Amanda M.; Swenson, Jennifer J.

    2017-05-01

    Deforestation continues across the tropics at alarming rates, with repercussions for ecosystem processes, carbon storage and long term sustainability. Taking advantage of recent fine-scale measurement of deforestation, this analysis aims to improve our understanding of the scale of deforestation drivers in the tropics. We examined trends in forest clearings of different sizes from 2000-2012 by country, region and development level. As tropical deforestation increased from approximately 6900 kha yr-1 in the first half of the study period, to >7900 kha yr-1 in the second half of the study period, >50% of this increase was attributable to the proliferation of medium and large clearings (>10 ha). This trend was most pronounced in Southeast Asia and in South America. Outside of Brazil >60% of the observed increase in deforestation in South America was due to an upsurge in medium- and large-scale clearings; Brazil had a divergent trend of decreasing deforestation, >90% of which was attributable to a reduction in medium and large clearings. The emerging prominence of large-scale drivers of forest loss in many regions and countries suggests the growing need for policy interventions which target industrial-scale agricultural commodity producers. The experience in Brazil suggests that there are promising policy solutions to mitigate large-scale deforestation, but that these policy initiatives do not adequately address small-scale drivers. By providing up-to-date and spatially explicit information on the scale of deforestation, and the trends in these patterns over time, this study contributes valuable information for monitoring, and designing effective interventions to address deforestation.

  16. Influence of voxel size settings in X-Ray CT Imagery of soil in scaling properties

    Science.gov (United States)

    Heck, R.; Scaiff, N. T.; Andina, D.; Tarquis, A. M.

    2012-04-01

    Fundamental to the interpretation and comparison of X-ray CT imagery of soil is recognition of the objectivity and consistency of procedures used to generate the 3D models. Notably, there has been a lack of consistency in the size of voxels used for diverse interpretations of soils features and processes; in part, this is due to the ongoing evolution of instrumentation and computerized image processing capacity. Moreover, there is still need for discussion on whether standard voxels sizes should be recommended, and what those would be. Regardless of any eventual adoption of such standards, there is a need to also consider the manner in which voxel size is set in the 3D imagery. In the typical approaches to X-ray CT imaging, voxel size may be set at three stages: image acquisition (involving the position of the sample relative to the tube and detector), image reconstruction (where binning of pixels in the acquired images may occur), as well as post-reconstruction re-sampling (which may involve algorithms such as tri-cubic convolution). This research evaluates and compares the spatial distribution of intra-aggregate voids in 3D imagery as well as their scaling properties, of equivalent voxel size, generated using various combinations of the afore-mentioned approaches. Funding provided by Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR is greatly appreciated.

  17. A laboratory scale approach to polymer solar cells using one coating/printing machine, flexible substrates, no ITO, no vacuum and no spincoating

    DEFF Research Database (Denmark)

    Carlé, Jon Eggert; Andersen, Thomas Rieks; Helgesen, Martin

    2013-01-01

    Printing of the silver back electrode under ambient conditions using simple laboratory equipment has been the missing link to fully replace evaporated metal electrodes. Here we demonstrate how a recently developed roll coater is further developed into a single machine that enables processing of all layers of the polymer solar cell without moving the substrate from one machine to another. The novel approach to polymer solar cells is readily scalable using one compact laboratory scale coating/printing machine that is directly compatible with industrial and pilot scale roll-to-roll processing. The use of the techniques was successfully demonstrated in one continuous roll process on flexible polyethylene terephthalate (PET) substrates, and polymer solar cells were prepared by solution processing of five layers using only slot-die coating and flexographic printing. The devices obtained did not employ indium...

  18. A study of dynamic finite size scaling behavior of the scaling functions—calculation of dynamic critical index of Wolff algorithm

    Science.gov (United States)

    Gündüç, Semra; Dilaver, Mehmet; Aydın, Meral; Gündüç, Yiğit

    2005-02-01

    In this work we have studied the dynamic scaling behavior of two scaling functions and we have shown that scaling functions obey the dynamic finite size scaling rules. Dynamic finite size scaling of scaling functions opens possibilities for a wide range of applications. As an application we have calculated the dynamic critical exponent (z) of Wolff's cluster algorithm for 2-, 3- and 4-dimensional Ising models. Configurations with vanishing initial magnetization are chosen in order to avoid complications due to initial magnetization. The observed dynamic finite size scaling behavior during early stages of the Monte Carlo simulation yields z for Wolff's cluster algorithm for 2-, 3- and 4-dimensional Ising models with vanishing values which are consistent with the values obtained from the autocorrelations. Especially, the vanishing dynamic critical exponent we obtained for d=3 implies that the Wolff algorithm is more efficient in eliminating critical slowing down in Monte Carlo simulations than previously reported.
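    For reference, a compact implementation of the Wolff single-cluster update for the 2D Ising model (the algorithm whose dynamic critical exponent is studied above) is sketched below; the lattice size, temperature, and number of updates are illustrative only, and the measurement of z from the early-time scaling of observables is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def wolff_update(spins, beta):
    """One Wolff cluster update of a 2D Ising lattice with periodic boundaries."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)        # bond-activation probability (J = 1)
    i, j = rng.integers(L, size=2)            # random seed site
    seed_spin = spins[i, j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L
            if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
                    and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:                       # flip the whole cluster
        spins[x, y] *= -1
    return len(cluster)

L, beta = 32, 0.4406868                        # beta near the 2D critical point
spins = rng.choice([-1, 1], size=(L, L))       # vanishing initial magnetization
sizes = [wolff_update(spins, beta) for _ in range(200)]
print("mean cluster size:", np.mean(sizes))
print("|magnetization| per spin:", abs(spins.sum()) / L**2)
```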

  19. Observed Multi-Decade DD and DT Z-Pinch Fusion Rate Scaling in 5 Dense Plasma Focus Fusion Machines

    Energy Technology Data Exchange (ETDEWEB)

    Hagen, E. C. [National Security Technologies, LLC; Lowe, D. R. [National Security Technologies, LLC; O' Brien, R. [University of Nevada, Las Vegas; Meehan, B. T. [National Security Technologies, LLC

    2013-06-18

    Dense Plasma Focus (DPF) machines are in use worldwide for a wide variety of applications; one of these is to produce intense, short bursts of fusion via r-Z pinch heating and compression of a working gas. We have designed and constructed a series of these, ranging from portable to a maximum energy storage capacity of 2 MJ. Fusion rates from 5 DPF pulsed fusion generators have been measured in a single laboratory using calibrated activation detectors. Measured rates range from ~10^15 to more than 10^19 fusions per second. Fusion rates from the intense short (20 – 50 ns) periods of production were inferred from measurement of neutron production using both calibrated activation detectors and scintillator-PMT neutron time of flight (NTOF) detectors. The NTOF detectors are arranged to measure neutrons versus time over flight paths of 30 meters. Fusion rate scaling versus energy and current will be discussed. Data showing observed fusion cutoff at D-D fusion yield levels of approximately 1×10^12, and corresponding tube currents of ~3 MA, will be shown. Energy asymmetry of product neutrons will also be discussed. Data from the NTOF lines of sight have been used to measure energy asymmetries of the fusion neutrons. From this, center of mass energies for the D(d,n)3He reaction are inferred. A novel re-entrant chamber that allows extremely high single pulse neutron doses (> 10^9 neutrons/cm^2 in 50 ns) to be supplied to samples will be described. Machine characteristics and detector types will be discussed.

  20. Synchronization in scale-free networks: The role of finite-size effects

    Science.gov (United States)

    Torres, D.; Di Muro, M. A.; La Rocca, C. E.; Braunstein, L. A.

    2015-06-01

    Synchronization problems in complex networks are very often studied by researchers due to their many applications to various fields such as neurobiology, e-commerce and completion of tasks. In particular, scale-free networks with degree distribution P(k) ∼ k^(-λ), are widely used in research since they are ubiquitous in Nature and other real systems. In this paper we focus on the surface relaxation growth model in scale-free networks with 2.5 < λ < 3, and study the scaling behavior of the fluctuations, in the steady state, with the system size N. We find a novel behavior of the fluctuations characterized by a crossover between two regimes at a value of N = N* that depends on λ: a logarithmic regime, found in previous research, and a constant regime. We propose a function that describes this crossover, which is in very good agreement with the simulations. We also find that, for a system size above N*, the fluctuations decrease with λ, which means that the synchronization of the system improves as λ increases. We explain this crossover analyzing the role of the network's heterogeneity produced by the system size N and the exponent of the degree distribution.

  1. Critical Behaviors and Finite-Size Scaling of Principal Fluctuation Modes in Complex Systems

    Science.gov (United States)

    Li, Xiao-Teng; Chen, Xiao-Song

    2016-09-01

    Complex systems consisting of N agents can be investigated from the aspect of principal fluctuation modes of agents. From the correlations between agents, an N × N correlation matrix C can be obtained. The principal fluctuation modes are defined by the eigenvectors of C. Near the critical point of a complex system, we anticipate that the principal fluctuation modes have critical behaviors similar to that of the susceptibility. With the Ising model on a two-dimensional square lattice as an example, the critical behaviors of principal fluctuation modes have been studied. The eigenvalues of the first 9 principal fluctuation modes have been investigated. Our Monte Carlo data demonstrate that these eigenvalues of the system with size L and reduced temperature t follow a finite-size scaling form λ_n(L, t) = L^(γ/ν) f_n(t L^(1/ν)), where γ is the critical exponent of the susceptibility and ν is the critical exponent of the correlation length. Using the eigenvalues λ_1, λ_2 and λ_6, we get the finite-size scaling form of the second-moment correlation length ξ(L, t) = L ξ̃(t L^(1/ν)). It is shown that the second-moment correlation length in the two-dimensional square lattice is anisotropic. Supported by the National Natural Science Foundation of China under Grant Nos. 11121403 and 11504384
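
    The scaling form quoted above lends itself to a simple numerical data-collapse check: dividing the measured eigenvalues by L^(γ/ν) and plotting them against t L^(1/ν) should make curves from different lattice sizes coincide. A minimal sketch with synthetic data (the scaling function f below is a stand-in, not the measured f_n):

```python
# Synthetic data-collapse sketch for lambda_n(L, t) = L^(gamma/nu) * f_n(t * L^(1/nu)).
# Only the rescaling step mirrors the analysis; f and the "data" are invented.
import numpy as np

gamma, nu = 7.0 / 4.0, 1.0                          # 2D Ising exponents
f = lambda x: 1.0 / (1.0 + x**2)                    # stand-in for the unknown f_n
x_grid = np.linspace(-2.0, 2.0, 5)                  # values of the scaling variable

for L in (16, 32, 64):
    t = x_grid / L**(1.0 / nu)                      # reduced temperatures probed
    lam = L**(gamma / nu) * f(t * L**(1.0 / nu))    # synthetic eigenvalue "data"
    collapsed = lam / L**(gamma / nu)               # rescaled eigenvalues
    print(L, np.round(collapsed, 4))                # identical rows => data collapse
```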

  2. The scaling relationship between telescope cost and aperture size for very large telescopes

    Science.gov (United States)

    van Belle, Gerard T.; Meinel, Aden Baker; Meinel, Marjorie Pettit

    2004-01-01

    Cost data for ground-based telescopes of the last century are analyzed for trends in the relationship between aperture size and cost. We find that for apertures built prior to 1980, costs scaled as aperture size to the 2.8 power, which is consistent with the previous finding of Meinel (1978). After 1980, 'traditional' monolithic mirror telescope costs have scaled as aperture to the 2.5 power. The large multiple mirror telescopes built or in construction during this time period (Keck, LBT, GTC) appear to deviate from this relationship with significant cost savings as a result, although it is unclear what power law such structures follow. We discuss the implications of the current cost-aperture size data on the proposed large telescope projects of the next ten to twenty years. Structures that naturally tend towards the 2.0 power in the cost-aperture relationship will be the favorable choice for future extremely large apertures; our expectation is that space-based structures will ultimately gain economic advantage over ground-based ones.
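
    The exponents quoted above come from fitting a power law, cost ∝ D^α, in log-log space. A minimal sketch of such a fit; the aperture/cost pairs are made-up placeholders, not the historical data analyzed in the paper:

```python
# Hedged sketch: estimating the exponent alpha in cost ~ D^alpha by a straight-line
# fit in log-log space.  The numbers are hypothetical placeholders, not the
# historical telescope cost data analyzed by van Belle et al.
import numpy as np

aperture_m = np.array([2.0, 4.0, 8.0, 10.0])        # hypothetical apertures (m)
cost_musd = np.array([5.0, 30.0, 170.0, 300.0])     # hypothetical costs (M$)

alpha, log_k = np.polyfit(np.log10(aperture_m), np.log10(cost_musd), 1)
print(f"cost ~ {10**log_k:.2f} * D^{alpha:.2f}")    # paper reports alpha ~ 2.5-2.8
```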

  3. 小型薄板类零件加工工艺研究%Study on Machining Process of Small-sized Sheet Metal Parts

    Institute of Scientific and Technical Information of China (English)

    许超; 赵华

    2015-01-01

    Small-sized sheet metal parts are widely used in industry, but they suffer from shortcomings such as poor rigidity, low strength, clamping deformation and low machining efficiency. Taking a typical small-sized sheet metal part as an example, the traditional process is analyzed and improvements are proposed in several aspects, including blank selection, the machining approach and workpiece clamping. After a period of application in the workshop, the new machining process has achieved good results in reducing production time, lowering labor intensity and improving the qualification rate, which greatly reduces the overall cost of the workpieces.

  4. Synchronization in Scale Free networks: The role of finite size effects

    CERN Document Server

    Torres, Débora; La Rocca, Cristian E; Braunstein, Lidia A

    2015-01-01

    Synchronization problems in complex networks are very often studied by researchers due to their many applications to various fields such as neurobiology, e-commerce and completion of tasks. In particular, Scale Free networks with degree distribution $P(k)\sim k^{-\lambda}$, are widely used in research since they are ubiquitous in nature and other real systems. In this paper we focus on the surface relaxation growth model in Scale Free networks with $2.5< \lambda <3$, and study the scaling behavior of the fluctuations, in the steady state, with the system size $N$. We find a novel behavior of the fluctuations characterized by a crossover between two regimes at a value of $N=N^*$ that depends on $\lambda$: a logarithmic regime, found in previous research, and a constant regime. We propose a function that describes this crossover, which is in very good agreement with the simulations. We also find that, for a system size above $N^{*}$, the fluctuations decrease with $\lambda$, which means that the synchroniza...

  5. Application of variable-angle belt scale on bucket wheel machine%变角度电子皮带秤在斗轮机上的应用

    Institute of Scientific and Technical Information of China (English)

    徐红义; 毛志平; 鲁朝阳; 余松青; 许峰; 薛丁富

    2016-01-01

    This paper introduces a mathematical model of the variable-angle electronic belt scale on a bucket wheel machine, accounting for how the forces on the scale change with the conveyor's working conditions. A belt scale was developed that automatically corrects the instrument's span coefficient and zero point when the cantilever angle of the bucket wheel machine changes, improving the metering accuracy of the belt scale installed on the machine. This ensures accurate coal blending, effectively improves reclaiming efficiency, saves energy and protects the environment, reduces wear on the conveying equipment associated with the reclaiming system, and extends the service life of the bucket wheel machine.

  6. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    Science.gov (United States)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; Nørskov, Jens K.

    2017-01-01

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by training a Gaussian process on adsorption energies, based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations. PMID:28262694

  7. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    Science.gov (United States)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; Nørskov, Jens K.

    2017-03-01

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by training a Gaussian process on adsorption energies, based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
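
    As a rough, hedged illustration of the surrogate-model loop described above (not the authors' code or data), the sketch below fits a Gaussian process to the energies computed so far and then selects the as-yet-uncalculated candidate with the largest predictive uncertainty for the next expensive calculation; the random fingerprints and the expensive_calculation stand-in are placeholders:

```python
# Hedged sketch of an uncertainty-driven surrogate loop (generic active learning,
# not the authors' framework): fit a Gaussian process to the energies computed so
# far, then pick the most uncertain remaining candidate for the next expensive
# calculation.  Fingerprints and expensive_calculation are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_all = rng.normal(size=(200, 8))                   # placeholder "fingerprints"

def expensive_calculation(X):                       # stand-in for a DFT-level step
    return X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=len(X))

known = list(range(5))                              # a few seed calculations
y_known = expensive_calculation(X_all[known])

for step in range(10):
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X_all[known], y_known)
    mean, std = gp.predict(X_all, return_std=True)
    std[known] = -np.inf                            # never re-select known points
    nxt = int(np.argmax(std))                       # most uncertain candidate
    known.append(nxt)
    y_known = np.append(y_known, expensive_calculation(X_all[[nxt]]))

print("points calculated explicitly:", known)
```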

  8. Wavelet Correlation Feature Scale Entropy and Fuzzy Support Vector Machine Approach for Aeroengine Whole-Body Vibration Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Cheng-Wei Fei

    2013-01-01

    Full Text Available In order to correctly analyze aeroengine whole-body vibration signals, a Wavelet Correlation Feature Scale Entropy (WCFSE) and Fuzzy Support Vector Machine (FSVM) method (WCFSE-FSVM) was proposed by fusing the advantages of the WCFSE method and the FSVM method. The wavelet coefficients were known to be located in high Signal-to-Noise Ratio (S/N or SNR) scales and were obtained by the Wavelet Transform Correlation Filter Method (WTCFM). This method was applied to address the whole-body vibration signals. The WCFSE method was derived from the integration of information entropy theory and the WTCFM, and was applied to extract the WCFSE values of the vibration signals. Among the WCFSE values, the WCFSE1 and WCFSE2 values on scales 1 and 2 from the high band of the vibration signal were believed to acceptably reflect the vibration features and were selected to construct the eigenvectors of the vibration signals as fault samples to establish the WCFSE-FSVM model. This model was applied to aeroengine whole-body vibration fault diagnosis. Through the diagnosis of four vibration fault modes and the comparison of the analysis results by four methods (SVM, FSVM, WESE-SVM, WCFSE-FSVM), it is shown that the WCFSE-FSVM method is characterized by higher learning ability, higher generalization ability and higher anti-noise ability than the other methods in aeroengine whole-body vibration fault analysis. Meanwhile, the present study provides useful insight into the vibration fault diagnosis of complex machinery beyond aeroengines.

  9. The maximum sizes of large scale structures in alternative theories of gravity

    CERN Document Server

    Bhattacharya, Sourav; Romano, Antonio Enea; Skordis, Constantinos; Tomaras, Theodore N

    2016-01-01

    The maximum size of a cosmic structure is given by the maximum turnaround radius -- the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulas for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedman-Robertson-Walker spacetime. We show that the two formulas agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the $\\Lambda$CDM value, by a factor $1 + \\frac{1}{3\\omega}$, where $\\omega\\gg 1$ is the Brans-Dicke parameter, implying consistency of the theory with current data.

  10. The far-infrared emitting region in local galaxies and QSOs: Size and scaling relations

    CERN Document Server

    Lutz, D; Contursi, A; Schreiber, N M Förster; Genzel, R; Graciá-Carpio, J; Herrera-Camus, R; Netzer, H; Sturm, E; Tacconi, L J; Tadaki, K; Veilleux, S

    2015-01-01

    We use Herschel 70 to 160um images to study the size of the far-infrared emitting region in 400 local galaxies and QSO hosts. The sample includes normal `main sequence' star forming galaxies, as well as infrared luminous galaxies and Palomar-Green QSOs, with different level and structure of star formation. Assuming gaussian spatial distribution of the far-infrared emission, the excellent stability of the Herschel point spread function allows us to measure sizes well below the PSF width, by subtracting widths in quadrature. We derive scalings of FIR size and surface brightness of local galaxies with FIR luminosity, with distance from the star forming `main sequence', and with FIR color. Luminosities LFIR~10^11Lsun can be reached with a variety of structures spanning 2 dex in size. Ultraluminous LFIR>~10^12Lsun galaxies far above the main sequence inevitably have small Re,70~0.5kpc FIR emitting regions with large surface brightness, and can be close to optically thick in the FIR on average over these regions. C...

  11. Why does offspring size affect performance? Integrating metabolic scaling with life-history theory.

    Science.gov (United States)

    Pettersen, Amanda K; White, Craig R; Marshall, Dustin J

    2015-11-22

    Within species, larger offspring typically outperform smaller offspring. While the relationship between offspring size and performance is ubiquitous, the cause of this relationship remains elusive. By linking metabolic and life-history theory, we provide a general explanation for why larger offspring perform better than smaller offspring. Using high-throughput respirometry arrays, we link metabolic rate to offspring size in two species of marine bryozoan. We found that metabolism scales allometrically with offspring size in both species: while larger offspring use absolutely more energy than smaller offspring, larger offspring use proportionally less of their maternally derived energy throughout the dependent, non-feeding phase. The increased metabolic efficiency of larger offspring while dependent on maternal investment may explain offspring size effects-larger offspring reach nutritional independence (feed for themselves) with a higher proportion of energy relative to structure than smaller offspring. These findings offer a potentially universal explanation for why larger offspring tend to perform better than smaller offspring but studies on other taxa are needed.

  12. Alzheimer's disease risk assessment using large-scale machine learning methods.

    Directory of Open Access Journals (Sweden)

    Ramon Casanova

    Full Text Available The goal of this work is to introduce new metrics to assess risk of Alzheimer's disease (AD), which we call AD Pattern Similarity (AD-PS) scores. These metrics are the conditional probabilities modeled by large-scale regularized logistic regression. The AD-PS scores derived from structural MRI and cognitive test data were tested across different situations using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The scores were computed across groups of participants stratified by cognitive status, age and functional status. Cox proportional hazards regression was used to evaluate associations with the distribution of conversion times from mild cognitive impairment to AD. The performances of classifiers developed using data from different types of brain tissue were systematically characterized across cognitive status groups. We also explored the performance of anatomical and cognitive-anatomical composite scores generated by combining the outputs of classifiers developed using different types of data. In addition, we report the performance of the AD-PS scores relative to other metrics used in the field, including the Spatial Pattern of Abnormalities for Recognition of Early AD (SPARE-AD) index and total hippocampal volume, for the variables examined.
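
    A minimal sketch of a risk score defined, as above, as the conditional probability output of a regularized logistic regression; the features are random placeholders, not ADNI imaging or cognitive data:

```python
# Minimal sketch: a probability-based risk score from L2-regularized logistic
# regression.  The feature matrix is random placeholder data, not ADNI measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 100))                      # placeholder features
y = (X[:, :3].sum(axis=1) + rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(penalty="l2", C=0.1, max_iter=1000).fit(X_tr, y_tr)
risk_score = clf.predict_proba(X_te)[:, 1]           # P(class = 1 | features)
print(np.round(risk_score[:5], 3))
```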

  13. Communication: system-size scaling of Boltzmann and alternate Gibbs entropies.

    Science.gov (United States)

    Vilar, Jose M G; Rubi, J Miguel

    2014-05-28

    It has recurrently been proposed that the Boltzmann textbook definition of entropy S(E) = k ln Ω(E) in terms of the number of microstates Ω(E) with energy E should be replaced by the expression S_G(E) = k ln Σ_{E'<E} Ω(E') examined by Gibbs. Here, we show that S_G either is equivalent to S in the macroscopic limit or becomes independent of the energy exponentially fast as the system size increases. The resulting exponential scaling makes the realistic use of S_G unfeasible and leads in general to temperatures that are inconsistent with the notions of hot and cold.
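
    A small worked example (not from the paper) makes the claimed macroscopic equivalence concrete: for N independent two-level units with k excitations, Ω(k) = C(N, k), and (setting k_B = 1) the per-particle difference between S = ln Ω(k) and S_G = ln Σ_{k'≤k} Ω(k') vanishes as N grows for energies below the median:

```python
# Worked numerical illustration (not from the paper): N independent two-level
# units with k excitations have Omega(k) = C(N, k).  With k_B = 1, compare
# S = ln Omega(k) with S_G = ln sum_{k' <= k} Omega(k') and watch the
# per-particle difference vanish as N grows (for an energy below the median).
import numpy as np
from scipy.special import gammaln

def log_binom(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

for N in (100, 1000, 10000):
    k = N // 4                                      # an energy below the median
    S = log_binom(N, k)
    terms = log_binom(N, np.arange(k + 1))          # log Omega(k') for k' <= k
    S_G = terms.max() + np.log(np.exp(terms - terms.max()).sum())
    print(N, (S_G - S) / N)                         # tends to 0 as N increases
```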

  14. Finite-size scaling of two-point statistics and the turbulent energy cascade generators.

    Science.gov (United States)

    Cleve, Jochen; Dziekan, Thomas; Schmiegel, Jürgen; Barndorff-Nielsen, Ole E; Pearson, Bruce R; Sreenivasan, Katepalli R; Greiner, Martin

    2005-02-01

    Within the framework of random multiplicative energy cascade models of fully developed turbulence, finite-size-scaling expressions for two-point correlators and cumulants are derived, taking into account the observationally unavoidable conversion from an ultrametric to an Euclidean two-point distance. The comparison with two-point statistics of the surrogate energy dissipation, extracted from various wind tunnel and atmospheric boundary layer records, allows an accurate deduction of multiscaling exponents and cumulants, even at moderate Reynolds numbers for which simple power-law fits are not feasible. The extracted exponents serve as input for parametric estimates of the probabilistic cascade generator. Various cascade generators are evaluated.

  15. Finite-size scaling study of dynamic critical phenomena in a vapor-liquid transition

    Science.gov (United States)

    Midya, Jiarul; Das, Subir K.

    2017-01-01

    Via a combination of molecular dynamics (MD) simulations and finite-size scaling (FSS) analysis, we study dynamic critical phenomena for the vapor-liquid transition in a three dimensional Lennard-Jones system. The phase behavior of the model has been obtained via the Monte Carlo simulations. The transport properties, viz., the bulk viscosity and the thermal conductivity, are calculated via the Green-Kubo relations, by taking inputs from the MD simulations in the microcanonical ensemble. The critical singularities of these quantities are estimated via the FSS method. The results thus obtained are in nice agreement with the predictions of the dynamic renormalization group and mode-coupling theories.

  16. Finite size scaling analysis on Nagel-Schreckenberg model for traffic flow

    Science.gov (United States)

    Balouchi, Ashkan; Browne, Dana

    2015-03-01

    The traffic flow problem as a many-particle non-equilibrium system has caught the interest of physicists for decades. Understanding traffic flow properties, and thus gaining the ability to control the transition from the free-flow phase to the jammed phase, will play a critical role in the emerging technology of self-driving cars. We have studied phase transitions in one-lane traffic flow through the mean velocity, the distribution of car spacings, the dynamic susceptibility and jam persistence - as candidates for an order parameter - using the Nagel-Schreckenberg model to simulate traffic flow. The length-dependent transition has been observed for a range of maximum velocities greater than a certain value. Finite-size scaling analysis indicates power-law scaling of these quantities at the onset of the jammed phase.
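
    For reference, the Nagel-Schreckenberg model updates all cars in parallel with four rules (accelerate, brake to the gap ahead, random slowdown, move). A minimal sketch on a circular road; parameters are illustrative, not those used in the study:

```python
# Minimal sketch of the Nagel-Schreckenberg update (accelerate, brake to the gap,
# random slowdown, move) on a circular single-lane road.  Parameters are
# illustrative only, not those of the study above.
import numpy as np

def nasch_step(pos, vel, road_length, v_max=5, p_slow=0.3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_length   # empty cells ahead of each car
    vel = np.minimum(vel + 1, v_max)                    # 1. accelerate
    vel = np.minimum(vel, gaps)                         # 2. brake to the available gap
    slow = rng.random(len(vel)) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)   # 3. random slowdown
    pos = (pos + vel) % road_length                     # 4. move all cars in parallel
    return pos, vel

road_length, n_cars = 200, 60
rng = np.random.default_rng(0)
pos = rng.choice(road_length, size=n_cars, replace=False)
vel = np.zeros(n_cars, dtype=int)
for t in range(500):
    pos, vel = nasch_step(pos, vel, road_length, rng=rng)
print("mean velocity (order-parameter candidate):", vel.mean())
```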

  17. Scale-free Universal Spectrum for Atmospheric Aerosol Size Distribution for Davos, Mauna Loa and Izana

    CERN Document Server

    Selvam, A M

    2011-01-01

    Atmospheric flows exhibit fractal fluctuations and inverse power law form for power spectra indicating an eddy continuum structure for the selfsimilar fluctuations. A general systems theory for fractal fluctuations developed by the author is based on the simple visualisation that large eddies form by space-time integration of enclosed turbulent eddies, a concept analogous to Kinetic Theory of Gases in Classical Statistical Physics. The ordered growth of atmospheric eddy continuum is in dynamical equilibrium and is associated with Maximum Entropy Production. The model predicts universal (scale-free) inverse power law form for fractal fluctuations expressed in terms of the golden mean. Atmospheric particulates are held in suspension in the fractal fluctuations of vertical wind velocity. The mass or radius (size) distribution for homogeneous suspended atmospheric particulates is expressed as a universal scale-independent function of the golden mean, the total number concentration and the mean volume radius. Mode...

  18. Mode splitting in high-index-contrast grating with mini-scale finite size.

    Science.gov (United States)

    Wang, Zhixin; Ni, Liangfu; Zhang, Haiyang; Zhang, Hanxing; Jin, Jicheng; Peng, Chao; Hu, Weiwei

    2016-08-15

    The mode-splitting phenomenon within finite-size, mini-scale high-index-contrast gratings (HCGs) has been investigated theoretically and experimentally. The high-Q resonance splits into a series of in-plane modes due to the confinement of boundaries but can still survive even on a mini-scale footprint. Q factors up to ∼3300 and ∼2200 have been observed for the HCGs with footprints that are only 55  μm×300  μm and 27.5  μm×300  μm, which would be promising for realizing optical communication and sensing applications with compact footprint.

  19. Recent advances in micro- and nano-machining technologies

    Science.gov (United States)

    Gao, Shang; Huang, Han

    2016-12-01

    Device miniaturization is an emerging advanced technology in the 21st century. The miniaturization of devices in different fields requires production of micro- and nano-scale components. The features of these components range from the sub-micron to a few hundred microns with high tolerance to many engineering materials. These fields mainly include optics, electronics, medicine, bio-technology, communications, and avionics. This paper reviewed the recent advances in micro- and nano-machining technologies, including micro-cutting, micro-electrical-discharge machining, laser micro-machining, and focused ion beam machining. The four machining technologies were also compared in terms of machining efficiency, workpiece materials being machined, minimum feature size, maximum aspect ratio, and surface finish.

  20. Anisotropic modulus stabilisation. Strings at LHC scales with micron-sized extra dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Burgess, C.P. [McMaster Univ., Hamilton (Canada). Dept. of Physics and Astronomy; Perimeter Institute for Theoretical Physics, Waterloo (Canada); Quevedo, F. [Cambridge Univ. (United Kingdom). DAMTP/CMS; Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)

    2011-04-15

    We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: (i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); (ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; (iii) a rich spectrum of string and KK states at TeV scales; and (iv) an equally rich spectrum of very light moduli exist having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are present on K3-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and

  1. Effects of chlorpyrifos on soil carboxylesterase activity at an aggregate-size scale.

    Science.gov (United States)

    Sanchez-Hernandez, Juan C; Sandoval, Marco

    2017-08-01

    The impact of pesticides on extracellular enzyme activity has been mostly studied on the bulk soil scale, and our understanding of the impact on an aggregate-size scale remains limited. Because microbial processes, and their extracellular enzyme production, are dependent on the size of soil aggregates, we hypothesized that the effect of pesticides on enzyme activities is aggregate-size specific. We performed three experiments using an Andisol to test the interaction between carboxylesterase (CbE) activity and the organophosphorus (OP) chlorpyrifos. First, we compared esterase activity among aggregates of different size spiked with chlorpyrifos (10 mg kg^(-1) wet soil). Next, we examined the inhibition of CbE activity by chlorpyrifos and its metabolite chlorpyrifos-oxon in vitro to explore the aggregate size-dependent affinity of the pesticides for the active site of the enzyme. Lastly, we assessed the capability of CbEs to alleviate chlorpyrifos toxicity upon soil microorganisms. Our principal findings were: 1) CbE activity was significantly inhibited (30-67% of controls) in the microaggregates (1.0mm) compared with the corresponding controls (i.e., pesticide-free aggregates), 2) chlorpyrifos-oxon was a more potent CbE inhibitor than chlorpyrifos; however, no significant differences in the CbE inhibition were found between micro- and macroaggregates, and 3) dose-response relationships between CbE activity and chlorpyrifos concentrations revealed the capability of the enzyme to bind chlorpyrifos-oxon, which was dependent on the time of exposure. This chemical interaction resulted in a safeguarding mechanism against chlorpyrifos-oxon toxicity on soil microbial activity, as evidenced by the unchanged activity of dehydrogenase and related extracellular enzymes in the pesticide-treated aggregates. Taken together, these results suggest that environmental risk assessments of OP-polluted soils should consider the fractionation of soil in aggregates of different size to

  2. Scale-invariant neuronal avalanche dynamics and the cut-off in size distributions.

    Directory of Open Access Journals (Sweden)

    Shan Yu

    Full Text Available Identification of cortical dynamics strongly benefits from the simultaneous recording of as many neurons as possible. Yet current technologies provide only incomplete access to the mammalian cortex from which adequate conclusions about dynamics need to be derived. Here, we identify constraints introduced by sub-sampling with a limited number of electrodes, i.e. spatial 'windowing', for well-characterized critical dynamics-neuronal avalanches. The local field potential (LFP) was recorded from premotor and prefrontal cortices in two awake macaque monkeys during rest using chronically implanted 96-microelectrode arrays. Negative deflections in the LFP (nLFP) were identified on the full as well as compact sub-regions of the array quantified by the number of electrodes N (10-95), i.e., the window size. Spatiotemporal nLFP clusters organized as neuronal avalanches, i.e., the probability in cluster size, p(s), invariably followed a power law with exponent -1.5 up to N, beyond which p(s) declined more steeply producing a 'cut-off' that varied with N and the LFP filter parameters. Clusters of size s≤N consisted mainly of nLFPs from unique, non-repeated cortical sites, emerged from local propagation between nearby sites, and carried spatial information about cluster organization. In contrast, clusters of size s>N were dominated by repeated site activations and carried little spatial information, reflecting greatly distorted sampling conditions. Our findings were confirmed in a neuron-electrode network model. Thus, avalanche analysis needs to be constrained to the size of the observation window to reveal the underlying scale-invariant organization produced by locally unfolding, predominantly feed-forward neuronal cascades.

  3. The size distribution, scaling properties and spatial organization of urban clusters: a global and regional perspective

    CERN Document Server

    Fluschnik, Till; Ros, Anselmo García Cantú; Zhou, Bin; Reusser, Dominik E; Kropp, Jürgen P; Rybski, Diego

    2014-01-01

    Human development has far-reaching impacts on the surface of the globe. The transformation of natural land cover occurs in different forms and urban growth is one of the most eminent transformative processes. We analyze global land cover data and extract cities as defined by maximally connected urban clusters. The analysis of the city size distribution for all cities on the globe confirms Zipf's law. Moreover, by investigating the percolation properties of the clustering of urban areas we assess the closeness to criticality for various countries. At the critical thresholds, the urban land cover of the countries undergoes a transition from separated clusters to a gigantic component on the country scale. We study the Zipf-exponents as a function of the closeness to percolation and find a systematic decrease with increasing scale, which could be the reason for deviating exponents reported in literature. Moreover, we investigate the average size of the clusters as a function of the proximity to percolation and fi...

  4. Small Scale Yielding Correction of Constraint Loss in Small Sized Fracture Toughness Test Specimens

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Maan Won; Kim, Min Chul; Lee, Bong Sang; Hong, Jun Hwa [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    Fracture toughness data in the ductile-brittle transition region of ferritic steels show scatter produced by local sampling effects and specimen geometry dependence which results from relaxation in crack tip constraint. The ASTM E1921 provides a standard test method to define the median toughness temperature curve, the so-called Master Curve, for the material corresponding to a 1T crack front length and also defines a reference temperature, T_0, at which the median toughness value is 100 MPa√m for a 1T size specimen. The ASTM E1921 procedures assume that high constraint, small-scale yielding (SSY) conditions prevail at fracture along the crack front. Violation of the SSY assumption occurs most often during tests of smaller specimens. Constraint loss in such cases leads to higher toughness values and thus lower T_0 values. When applied to a structure with low constraint geometry, the standard fracture toughness estimates may lead to strongly over-conservative estimates. A lot of effort has been made to adjust for the constraint effect. In this work, we applied a small-scale yielding correction (SSYC) to adjust for the constraint loss of 1/3PCVN and PCVN specimens, which are relatively smaller than a 1T size specimen, in the fracture toughness Master Curve test.

  5. Finite size scaling RG: detailed description and applications to diluted Ising systems

    Science.gov (United States)

    de Figueiredo Neto, João Monteiro; de Oliveira, Suzana Maria Moss; de Oliveira, Paulo Murilo Castro

    1994-05-01

    The finite size scaling renormalisation group (FSSRG) was introduced in Europhysics Letters 20 (1992) 621. Based only on the finite size scaling hypothesis, with no further assumptions, it differs from other real space renormalisation groups (RSRGs) in the following essential point: one does not need to adopt any particular recipe exp[-H′(S′)/T] = Σ_S P(S, S′) exp[-H(S)/T] relating the spin states S of the original system to the spin states S′ of a renormalised system. The choice of a particular weight function P(S, S′), e.g. the so called majority rule, is generally based on plausibility arguments, and involves uncontrollable approximations. In addition to being free from these drawbacks, FSSRG shares with RSRG some good features as, for instance, the possibility of extracting qualitative information from multi-parameter RG flow diagrams, including crossovers, universality classes, universality breakings, multicriticalities, orders of transitions, etc. Other unpleasant consequences of particular weight functions, as the so called proliferation of parameters, are also absent in the FSSRG. Using it in three dimensions, we were able to find a semi-unstable fixed point in the critical frontier concentration p versus exchange coupling J, characterizing a universality class crossover when one goes from pure to diluted Ising ferromagnets. The specific heat exponents we have obtained for the pure and diluted regimes are in agreement with the Harris criterion.

  6. Functional network construction in Arabidopsis using rule-based machine learning on large-scale data sets.

    Science.gov (United States)

    Bassel, George W; Glaab, Enrico; Marquez, Julietta; Holdsworth, Michael J; Bacardit, Jaume

    2011-09-01

    The meta-analysis of large-scale postgenomics data sets within public databases promises to provide important novel biological knowledge. Statistical approaches including correlation analyses in coexpression studies of gene expression have emerged as tools to elucidate gene function using these data sets. Here, we present a powerful and novel alternative methodology to computationally identify functional relationships between genes from microarray data sets using rule-based machine learning. This approach, termed "coprediction," is based on the collective ability of groups of genes co-occurring within rules to accurately predict the developmental outcome of a biological system. We demonstrate the utility of coprediction as a powerful analytical tool using publicly available microarray data generated exclusively from Arabidopsis thaliana seeds to compute a functional gene interaction network, termed Seed Co-Prediction Network (SCoPNet). SCoPNet predicts functional associations between genes acting in the same developmental and signal transduction pathways irrespective of the similarity in their respective gene expression patterns. Using SCoPNet, we identified four novel regulators of seed germination (ALTERED SEED GERMINATION5, 6, 7, and 8), and predicted interactions at the level of transcript abundance between these novel and previously described factors influencing Arabidopsis seed germination. An online Web tool to query SCoPNet has been developed as a community resource to dissect seed biology and is available at http://www.vseed.nottingham.ac.uk/.
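
    As a loose analogue only (not the authors' rule-based learner), the idea of linking genes by their joint predictive power can be sketched by counting how often two features are used together inside the same small decision tree of an ensemble trained to predict the outcome; the expression matrix and labels below are random placeholders:

```python
# Loose, hedged analogue of the "coprediction" idea (not the authors' rule-based
# learner): link features that are repeatedly used together within the same small
# decision tree of an ensemble predicting the outcome.  Data are random placeholders.
import itertools
from collections import Counter

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                                   # placeholder expression matrix
y = (X[:, 0] * X[:, 7] + rng.normal(size=300) > 0).astype(int)   # placeholder outcome

forest = RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0).fit(X, y)

edges = Counter()
for tree in forest.estimators_:
    used = sorted({int(f) for f in tree.tree_.feature if f >= 0})  # features split on
    edges.update(itertools.combinations(used, 2))

print(edges.most_common(5))    # most frequently co-used feature pairs => candidate links
```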

  7. Scaling of plant size and age emerges from linked aboveground and belowground transport network properties

    Science.gov (United States)

    Manzoni, S.; Hunt, A. G.

    2016-12-01

    Vegetation growth modulates cycling of water, carbon, and nutrients at local-to-global scales. It is therefore critical to quantify plant growth rates and how they are constrained by environmental conditions (especially limited resource availability). Various theoretical approaches have been proposed to this aim. Specifically, allometric theory provides a powerful tool to describe plant growth form and function, but it is focused on the properties of plant xylem networks, neglecting any role played by soils in supplying water to plants. On the other hand, percolation theory addresses physical constraints imposed by the soil pore network to water and nutrient transport, neglecting roles of root networks and vegetation taking up soil resources. In this contribution, we merge these two perspectives to derive scaling relations between plant size (namely height) and age. Our guiding hypothesis is that the root network expands in the soil at a rate sufficient to match the rate of transport of water and nutrients in an idealized optimal fractal pore network, as predicted by percolation theory; with nutrient transport distance vs. time scaling exponent 0.82, and water transport (saturated conditions) distance vs. time scaling exponent 1. The root expansion rate is mirrored by growth aboveground, as in allometric theory, which predicts an isometric relation between root extension and plant height. Building on these results, we predict that the scaling of plant height and age should also have exponent 0.82 in natural systems where nutrients are heterogeneously distributed, and 1 in fertilized systems where nutrients are homogeneously distributed. These predictions are successfully tested with extensive datasets covering major plant functional types worldwide, showing that soil and root network properties constrain vegetation growth by setting limits to the rates of water and nutrient supply to plants.

  8. Scale size and life time of energy conversion regions observed by Cluster in the plasma sheet

    Directory of Open Access Journals (Sweden)

    M. Hamrin

    2009-11-01

    Full Text Available In this article, and in a companion paper by Hamrin et al. (2009) [Occurrence and location of concentrated load and generator regions observed by Cluster in the plasma sheet], we investigate localized energy conversion regions (ECRs) in Earth's plasma sheet. From more than 80 Cluster plasma sheet crossings (660 h of data) at altitudes of about 15–20 RE in the summer and fall of 2001, we have identified 116 Concentrated Load Regions (CLRs) and 35 Concentrated Generator Regions (CGRs). By examining variations in the power density, E·J, where E is the electric field and J is the current density obtained by Cluster, we have estimated typical values of the scale size and life time of the CLRs and the CGRs. We find that a majority of the observed ECRs are rather stationary in space, but varying in time. Assuming that the ECRs are cylindrically shaped and equal in size, we conclude that the typical scale size of the ECRs is 2 RE ≲ ΔS_ECR ≲ 5 RE. The ECRs hence occupy a significant portion of the mid-altitude plasma sheet. Moreover, the CLRs appear to be somewhat larger than the CGRs. The life time of the ECRs is of the order of 1–10 min, consistent with the large-scale magnetotail MHD simulations of Birn and Hesse (2005). The life time of the CGRs is somewhat shorter than that of the CLRs. On time scales of 1–10 min, we believe that ECRs rise and vanish in significant regions of the plasma sheet, possibly oscillating between load and generator character. It is probable that at least some of the observed ECRs oscillate energy back and forth in the plasma sheet instead of channeling it to the ionosphere.

  9. Mapping field-scale spatial patterns of size and activity of the denitrifier community.

    Science.gov (United States)

    Philippot, Laurent; Cuhel, Jiri; Saby, Nicolas P A; Chèneby, Dominique; Chronáková, Alicia; Bru, David; Arrouays, Dominique; Martin-Laurent, Fabrice; Simek, Miloslav

    2009-06-01

    There is ample evidence that microbial processes can exhibit large variations in activity on a field scale. However, very little is known about the spatial distribution of the microbial communities mediating these processes. Here we used geostatistical modelling to explore spatial patterns of size and activity of the denitrifying community, a functional guild involved in N-cycling, in a grassland field subjected to different cattle grazing regimes. We observed a non-random distribution pattern of the size of the denitrifier community estimated by quantification of the denitrification gene copy numbers with a macro-scale spatial dependence (6-16 m) and mapped the distribution of this functional guild in the field. The spatial patterns of soil properties, which were strongly affected by the presence of cattle, imposed significant control on potential denitrification activity, potential N2O production and the relative abundance of some denitrification genes but not on the size of the denitrifier community. Absolute abundance of most denitrification genes was not correlated with the distribution patterns of potential denitrification activity or potential N2O production. However, the relative abundance of bacteria possessing the nosZ gene encoding the N2O reductase in the total bacterial community was a strong predictor of the N2O/(N2 + N2O) ratio, which provides evidence for a relationship between bacterial community composition based on the relative abundance of denitrifiers in the total bacterial community and ecosystem processes. More generally, the presented geostatistical approach allows integrated mapping of microbial communities, and hence can facilitate our understanding of relationships between the ecology of microbial communities and microbial processes along environmental gradients.

  10. Scaling of xylem and phloem transport capacity and resource usage with tree size.

    Science.gov (United States)

    Hölttä, Teemu; Kurppa, Miika; Nikinmaa, Eero

    2013-01-01

    Xylem and phloem need to maintain steady transport rates of water and carbohydrates to match the exchange rates of these compounds at the leaves. A major proportion of the carbon and nitrogen assimilated by a tree is allocated to the construction and maintenance of the xylem and phloem long distance transport tissues. This proportion can be expected to increase with increasing tree size due to the growing transport distances between the assimilating tissues, i.e., leaves and fine roots, at the expense of their growth. We formulated whole tree level scaling relations to estimate how xylem and phloem volume, nitrogen content and hydraulic conductance scale with tree size, and how these properties are distributed along a tree height. Xylem and phloem thicknesses and nitrogen contents were measured within varying positions in four tree species from Southern Finland. Phloem volume, nitrogen amount and hydraulic conductance were found to be concentrated toward the branch and stem apices, in contrast to the xylem where these properties were more concentrated toward the tree base. All of the species under study demonstrated very similar trends. Total nitrogen amount allocated to xylem and phloem was predicted to be comparable to the nitrogen amount allocated to the leaves in small and medium size trees, and to increase significantly above the nitrogen content of the leaves in larger trees. Total volume, hydraulic conductance and nitrogen content of the xylem were predicted to increase faster than that of the phloem with increasing tree height in small trees (xylem sapwood turnover to heartwood, if present, would maintain phloem conductance at the same level with xylem conductance with further increases in tree height. Further simulations with a previously published xylem-phloem transport model demonstrated that the Münch pressure flow hypothesis could explain phloem transport with increasing tree height even for the tallest trees.

  11. Socio-Economic Instability and the Scaling of Energy Use with Population Size.

    Directory of Open Access Journals (Sweden)

    John P DeLong

    Full Text Available The size of the human population is relevant to the development of a sustainable world, yet the forces setting growth or declines in the human population are poorly understood. Generally, population growth rates depend on whether new individuals compete for the same energy (leading to Malthusian or density-dependent growth) or help to generate new energy (leading to exponential and super-exponential growth). It has been hypothesized that exponential and super-exponential growth in humans has resulted from carrying capacity, which is in part determined by energy availability, keeping pace with or exceeding the rate of population growth. We evaluated the relationship between energy use and population size for countries with long records of both and the world as a whole to assess whether energy yields are consistent with the idea of an increasing carrying capacity. We find that on average energy use has indeed kept pace with population size over long time periods. We also show, however, that the energy-population scaling exponent plummets during, and its temporal variability increases preceding, periods of social, political, technological, and environmental change. We suggest that efforts to increase the reliability of future energy yields may be essential for stabilizing both population growth and the global socio-economic system.

  12. Socio-Economic Instability and the Scaling of Energy Use with Population Size.

    Science.gov (United States)

    DeLong, John P; Burger, Oskar

    2015-01-01

    The size of the human population is relevant to the development of a sustainable world, yet the forces setting growth or declines in the human population are poorly understood. Generally, population growth rates depend on whether new individuals compete for the same energy (leading to Malthusian or density-dependent growth) or help to generate new energy (leading to exponential and super-exponential growth). It has been hypothesized that exponential and super-exponential growth in humans has resulted from carrying capacity, which is in part determined by energy availability, keeping pace with or exceeding the rate of population growth. We evaluated the relationship between energy use and population size for countries with long records of both and the world as a whole to assess whether energy yields are consistent with the idea of an increasing carrying capacity. We find that on average energy use has indeed kept pace with population size over long time periods. We also show, however, that the energy-population scaling exponent plummets during, and its temporal variability increases preceding, periods of social, political, technological, and environmental change. We suggest that efforts to increase the reliability of future energy yields may be essential for stabilizing both population growth and the global socio-economic system.

  13. Phenotypic consequences of polyploidy and genome size at the microevolutionary scale: a multivariate morphological approach.

    Science.gov (United States)

    Balao, Francisco; Herrera, Javier; Talavera, Salvador

    2011-10-01

    • Chromosomal duplications and increases in DNA amount have the potential to alter quantitative plant traits like flower number, plant stature or stomata size. This has been documented often across species, but information on whether such effects also occur within species (i.e. at the microevolutionary or population scale) is scarce. • We studied trait covariation associated with polyploidy and genome size (both monoploid and total) in 22 populations of Dianthus broteri s.l., a perennial herb with several cytotypes (2x, 4x, 6x and 12x) that do not coexist spatially. Principal component scores of organ size/number variations were assessed as correlates of polyploidy, and phylogenetic relatedness among populations was controlled using phylogenetic generalized least squares. • Polyploidy covaried with organ dimensions, causing multivariate characters to increase, remain unchanged, or decrease with DNA amount. Variations in monoploid DNA amount had detectable consequences on some phenotypic traits. According to the analyses, some traits would experience phenotypic selection, while others would not. • We show that polyploidy contributes to decouple variation among traits in D. broteri, and hypothesize that polyploids may experience an evolutionary advantage in this plant lineage, for example, if it helps to overcome the constraints imposed by trait integration.

  14. Large Scale Behavior and Droplet Size Distributions in Crude Oil Jets and Plumes

    Science.gov (United States)

    Katz, Joseph; Murphy, David; Morra, David

    2013-11-01

    The 2010 Deepwater Horizon blowout introduced several million barrels of crude oil into the Gulf of Mexico. Injected initially as a turbulent jet containing crude oil and gas, the spill caused formation of a subsurface plume stretching for tens of miles. The behavior of such buoyant multiphase plumes depends on several factors, such as the oil droplet and bubble size distributions, current speed, and ambient stratification. While large droplets quickly rise to the surface, fine ones together with entrained seawater form intrusion layers. Many elements of the physics of droplet formation by an immiscible turbulent jet and their resulting size distribution have not been elucidated, but are known to be significantly influenced by the addition of dispersants, which vary the Weber Number by orders of magnitude. We present experimental high speed visualizations of turbulent jets of sweet petroleum crude oil (MC 252) premixed with Corexit 9500A dispersant at various dispersant to oil ratios. Observations were conducted in a 0.9 m × 0.9 m × 2.5 m towing tank, where large-scale behavior of the jet, both stationary and towed at various speeds to simulate cross-flow, have been recorded at high speed. Preliminary data on oil droplet size and spatial distributions were also measured using a videoscope and pulsed light sheet. Sponsored by Gulf of Mexico Research Initiative (GoMRI).

  15. Scaling Relations of Star-Forming Regions: from kpc-size clumps to HII regions

    CERN Document Server

    Wisnioski, Emily; Blake, Chris; Poole, Gregory B; Green, Andrew W; Wyder, Ted; Martin, Chris

    2012-01-01

    We present the properties of 8 star-forming regions, or 'clumps,' in 3 galaxies at z~1.3 from the WiggleZ Dark Energy Survey, which are resolved with the OSIRIS integral field spectrograph. Within turbulent discs, \\sigma~90 km/s, clumps are measured with average sizes of 1.5 kpc and average Jeans masses of 4.2 x 10^9 \\Msolar, in total accounting for 20-30 per cent of the stellar mass of the discs. These findings lend observational support to models that predict larger clumps will form as a result of higher disc velocity dispersions driven-up by cosmological gas accretion. As a consequence of the changes in global environment, it may be predicted that star-forming regions at high redshift should not resemble star-forming regions locally. Yet despite the increased sizes and dispersions, clumps and HII regions are found to follow tight scaling relations over the range z=0-2 for size, velocity dispersion, luminosity, and mass when comparing >2000 HII regions locally and 30 clumps at z>1 (\\sigma \\propto r^{0.42+/-...

  16. Ejecta- and Size-Scaling Considerations from Impacts of Glass Projectiles into Sand

    Science.gov (United States)

    Anderson J. L. B.; Cintala, M. J.; Siebenaler, S. A.; Barnouin-Jha, O. S.

    2007-01-01

    One of the most promising means of learning how initial impact conditions are related to the processes leading to the formation of a planetary-scale crater is through scaling relationships.1,2,3 The first phase of deriving such relationships has led to great insight into the cratering process and has yielded predictive capabilities that are mathematically rigorous and internally consistent. Such derivations typically have treated targets as continuous media; in many, cases, however, planetary materials represent irregular and discontinuous targets, the effects of which on the scaling relationships are still poorly understood.4,5 We continue to examine the effects of varying impact conditions on the excavation and final dimensions of craters formed in sand. Along with the more commonly treated variables such as impact speed, projectile size and material, and impact angle,6 such experiments also permit the study of changing granularity and friction angle of the target materials. This contribution presents some of the data collected during and after the impact of glass spheres into a medium-grained sand.

  17. Excess area dependent scaling behavior of nano-sized membrane tethers

    CERN Document Server

    Ramakrishnan, N; Eckmann, David M; Ayyaswamy, Portnovo S; Baumgart, Tobias; Pucadyil, Thomas; Patil, Shivprasad; Weaver, Valerie M; Radhakrishnan, Ravi

    2016-01-01

    Thermal fluctuations in cell membranes manifest as an excess area (${\\cal A}_{\\rm ex}$) which governs a multitude of physical process at the sub-micron scale. We present a theoretical framework, based on an in silico tether pulling method, which may be used to reliably estimate ${\\cal A}_{\\rm ex}$ in live cells. The tether forces estimated from our simulations compare well with our experimental measurements for tethers extracted from ruptured GUVs and HeLa cells. We demonstrate the significance and validity of our method by showing that all our calculations along with experiments of tether extraction in 15 different cell types collapse onto two unified scaling relationships mapping tether force, tether radius, bending stiffness $\\kappa$, and membrane tension $\\sigma$. We show that $R_{\\rm bead}$, the size of the wetting region, is an important determinant of the radius of the extracted tether, which is equal to $\\xi=\\sqrt{\\kappa/2\\sigma}$ (a characteristic length scale of the membrane) for $R_{\\rm bead}\\xi$. ...

  18. Finite-Size Scaling of Non-Gaussian Fluctuations Near the QCD Critical Point

    CERN Document Server

    Lacey, Roy A; Magdy, Niseem; Schweid, B; Ajitanand, N N

    2016-01-01

    Finite-Size Scaling (FSS) of moment products from recent STAR measurements of the variance $\sigma$, skewness $S$ and kurtosis $\kappa$ of net-proton multiplicity distributions, is reported for a broad range of collision centralities in Au+Au ($\sqrt{s_{NN}}= 7.7 - 200$ GeV) collisions. The products $S\sigma $ and $\kappa \sigma^2 $, which are directly related to the higher-order baryon number susceptibility ratios $\chi^{(3)}_B/\chi^{(2)}_B$ and $\chi^{(4)}_B/\chi^{(2)}_B$, show scaling patterns consistent with earlier indications for a second order phase transition at a critical end point (CEP) in the plane of temperature vs. baryon chemical potential ($T,\mu_B$) of the QCD phase diagram. The resulting scaling functions validate the earlier estimates of $T^{\text{cep}} \sim 165$~MeV and $\mu_B^{\text{cep}} \sim 95$~MeV for the location of the CEP, and the critical exponents used to assign its 3D Ising model universality class.

  19. Finite-size corrections and scaling for the dimer model on the checkerboard lattice

    Science.gov (United States)

    Izmailian, Nickolay Sh.; Wu, Ming-Chya; Hu, Chin-Kun

    2016-11-01

    Lattice models are useful for understanding behaviors of interacting complex many-body systems. The lattice dimer model has been proposed to study the adsorption of diatomic molecules on a substrate. Here we analyze the partition function of the dimer model on a 2M × 2N checkerboard lattice wrapped on a torus and derive the exact asymptotic expansion of the logarithm of the partition function. We find that the internal energy at the critical point is equal to zero. We also derive the exact finite-size corrections for the free energy, the internal energy, and the specific heat. Using the exact partition function and finite-size corrections for the dimer model on a finite checkerboard lattice, we obtain finite-size scaling functions for the free energy, the internal energy, and the specific heat of the dimer model. We investigate the properties of the specific heat near the critical point and find that the specific-heat pseudocritical point coincides with the critical point of the thermodynamic limit, which means that the specific-heat shift exponent λ is equal to ∞. We have also considered the limit N → ∞ for which we obtain the expansion of the free energy for the dimer model on the infinitely long cylinder. From a finite-size analysis we have found that two conformal field theories with the central charges c = 1 for the height function description and c = -2 for the construction using a mapping of spanning trees can be used to describe the dimer model on the checkerboard lattice.
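
    The record above treats the checkerboard lattice wrapped on a torus, where the partition function is built from Pfaffians. For orientation only, the classical Kasteleyn product formula for the simpler case of an open m × n square lattice (with mn even) is easy to evaluate and reproduces well-known dimer counts:

```python
# For orientation only: Kasteleyn's product formula for the number of dimer
# coverings of an open m x n square lattice (m*n even).  This is the textbook
# square-lattice case, not the checkerboard-on-a-torus setting of the record.
import math

def dimer_coverings(m, n):
    z = 1.0
    for j in range(1, m // 2 + 1):
        for k in range(1, n // 2 + 1):
            z *= (4 * math.cos(math.pi * j / (m + 1)) ** 2
                  + 4 * math.cos(math.pi * k / (n + 1)) ** 2)
    return round(z)

# 2x2 -> 2, 2x3 -> 3, 8x8 -> 12988816 (the classic chessboard domino count)
print(dimer_coverings(2, 2), dimer_coverings(2, 3), dimer_coverings(8, 8))
```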

  20. Bayesian hierarchical model used to analyze regression between fish body size and scale size: application to rare fish species Zingel asper

    Directory of Open Access Journals (Sweden)

    Fontez B.

    2014-04-01

    Full Text Available Back-calculation allows researchers to increase the data available on fish growth. The accuracy of back-calculation models is of paramount importance for growth analysis. Frequentist and Bayesian hierarchical approaches were used for regression between fish body size and scale size for the rare fish species Zingel asper. The Bayesian approach permits more reliable estimation of back-calculated size, taking into account biological information and cohort variability. This method greatly improves estimation of back-calculated length when sampling is uneven and/or small.
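
    The abstract does not give the model specification; the following is a minimal sketch of one plausible hierarchical formulation (cohort-level intercepts and slopes for the body-length versus scale-radius regression), written with the PyMC library and entirely hypothetical data and variable names.

        import numpy as np
        import pymc as pm

        # Hypothetical data: scale radius, body length and cohort index per fish
        scale_radius = np.array([1.2, 1.5, 2.1, 0.9, 1.8, 2.4])
        body_length  = np.array([55., 63., 80., 47., 71., 88.])
        cohort       = np.array([0, 0, 1, 1, 2, 2])
        n_cohorts    = 3

        with pm.Model() as model:
            # Population-level (hyper)priors
            mu_a = pm.Normal("mu_a", 0.0, 50.0)
            mu_b = pm.Normal("mu_b", 0.0, 50.0)
            sd_a = pm.HalfNormal("sd_a", 10.0)
            sd_b = pm.HalfNormal("sd_b", 10.0)
            # Cohort-level intercepts and slopes (partial pooling across cohorts)
            a = pm.Normal("a", mu_a, sd_a, shape=n_cohorts)
            b = pm.Normal("b", mu_b, sd_b, shape=n_cohorts)
            sigma = pm.HalfNormal("sigma", 10.0)
            # Linear body-size vs scale-size regression within each cohort
            mu = a[cohort] + b[cohort] * scale_radius
            pm.Normal("obs", mu, sigma, observed=body_length)
            idata = pm.sample(1000, tune=1000, chains=2)

    Back-calculated lengths would then be obtained by evaluating the cohort-specific regression at the radii of earlier annuli, with posterior draws providing the uncertainty that a frequentist fit understates for small or uneven samples.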

  1. An HTS machine laboratory prototype

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Træholt, Chresten

    2012-01-01

    This paper describes the Superwind HTS machine laboratory setup, which is a small-scale HTS machine designed and built as part of the efforts to identify and tackle some of the challenges the HTS machine design may face. One of the challenges of HTS machines is a Torque Transfer Element (TTE) which...

  2. Size scale dependence of compressive instabilities in layered composites in the presence of stress gradients

    DEFF Research Database (Denmark)

    Poulios, Konstantinos; Niordson, Christian Frithiof

    2016-01-01

    The compressive strength of unidirectionally or layer-wise reinforced composite materials in the direction parallel to their reinforcement is limited by micro-buckling instabilities. Although the inherent compressive strength of a given material micro-structure can easily be determined by assessing its...... compressive stress but also on spatial stress or strain gradients, rendering failure initiation size scale dependent. The present work demonstrates and investigates the aforementioned effect through numerical simulations of periodically layered structures with notches and holes under bending and compressive...... loads, respectively. The presented results emphasize the importance of the reinforcing layer thickness on the load carrying capacity of the investigated structures, at a constant volumetric fraction of the reinforcement. The observed strengthening at higher values of the relative layer thickness...

  3. C/NOFS observations of intermediate and transitional scale-size equatorial spread F irregularities

    Science.gov (United States)

    Rodrigues, F. S.; Kelley, M. C.; Roddy, P. A.; Hunton, D. E.; Pfaff, R. F.; de La Beaujardière, O.; Bust, G. S.

    We present initial results of the analysis of high sampling rate (512 Hz) measurements made by the Planar Langmuir Probe (PLP) instrument and the Vector Electric Field Instrument (VEFI) onboard the Communication/Navigation Outage Forecasting System (C/NOFS) satellite. This letter focuses on the analysis of irregularities with scale-sizes in the intermediate (0.1-10 km) and transitional (10-100 m) domains observed when the satellite was flying through a large equatorial spread F (ESF) depletion on the night of October 9-10, 2008 over South America. The results presented in this letter suggest the operation of a diffusive subrange in the density power spectra and the possibility of an inertial plasma regime being observed at relatively low altitudes as a result of the long-lasting solar minimum conditions.

  4. Simple rules govern finite-size effects in scale-free networks

    CERN Document Server

    Cuenda, Sara

    2011-01-01

    We give an intuitive though general explanation of the finite-size effect in scale-free networks in terms of the degree distribution of the starting network. This result clarifies the relevance of the starting network in the final degree distribution. We use two different approaches: the deterministic mean-field approximation used by Barabási and Albert (but taking into account the nodes of the starting network), and the probability distribution of the degree of each node, which considers the stochastic process. Numerical simulations show that the accuracy of the predictions of the mean-field approximation depends on the contribution of the dispersion in the final distribution. The results in terms of the probability distribution of the degree of each node are very accurate when compared to numerical simulations. The analysis of the standard deviation of the degree distribution allows us to assess the influence of the starting core when fitting the model to real data.
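
    A small numerical sketch of the finite-size effect discussed here: grow Barabási-Albert networks of increasing size with networkx (whose generator starts from a small star graph) and inspect how the high-degree tail of the empirical degree distribution fills out as N grows. The sizes and parameters below are illustrative only, not those of the paper.

        import networkx as nx
        import numpy as np

        def degree_ccdf(graph):
            """Empirical complementary CDF of the degree distribution."""
            degrees = np.array([d for _, d in graph.degree()])
            values = np.sort(np.unique(degrees))
            ccdf = np.array([(degrees >= v).mean() for v in values])
            return values, ccdf

        # Networks of increasing size; deviations from the infinite-size power law
        # appear at the high-degree tail and shrink as the network grows.
        for n in (10**3, 10**4, 10**5):
            g = nx.barabasi_albert_graph(n, m=3, seed=1)
            k, p = degree_ccdf(g)
            print(n, "max degree:", k.max(), "P(k >= 100):", p[k >= 100][:1])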

  5. Effect of composition and grain size on electrical discharge machining of BN-TiB{sub 2} composites

    Energy Technology Data Exchange (ETDEWEB)

    Gadalla, A.M.; Bedi, H.S. (Chemical Engineering Department, Texas A&M University, College Station, Texas (USA))

    1991-11-01

    TiB{sub 2} conducts the current and forms a liquid phase at the interface with BN. Neighboring crystals of BN and some TiB{sub 2} spall due to thermal shock. During pause periods parts of the liquid and fragments are flushed out by the dielectric. Composites rich in TiB{sub 2} or with fine TiB{sub 2} grains gave high material removal rates. Increasing the amount of conducting phase by 10% is as effective as decreasing the grain size from 11 to 7 {mu}m. Coarse TiB{sub 2} could withstand high pulse durations before wire breaks. Material removal rate increases with pulse duration, frequency, and current. For the same composition and grain size, increasing the pulse duration or current increased the crater depth (the roughness) up to a certain value, beyond which increasing these parameters yielded a smoother surface. The conductivity of the dielectric was effective only for compositions rich in TiB{sub 2} content. In such cases, higher water conductivity lowered the energy required for material removal.

  6. Debris flow grain size scales with sea surface temperature over glacial-interglacial timescales

    Science.gov (United States)

    D'Arcy, Mitch; Roda Boluda, Duna C.; Whittaker, Alexander C.; Araújo, João Paulo C.

    2015-04-01

    Debris flows are common erosional processes responsible for a large volume of sediment transfer across a range of landscapes from arid settings to the tropics. They are also significant natural hazards in populated areas. However, we lack a clear set of debris flow transport laws, meaning that: (i) debris flows remain largely neglected by landscape evolution models; (ii) we do not understand the sensitivity of debris flow systems to past or future climate changes; and (iii) it remains unclear how to interpret debris flow stratigraphy and sedimentology, for example whether their deposits record information about past tectonics or palaeoclimate. Here, we take a grain size approach to characterising debris flow deposits from 35 well-dated alluvial fan surfaces in Owens Valley, California. We show that the average grain sizes of these granitic debris flow sediments scale precisely with sea surface temperature throughout the entire last glacial-interglacial cycle, increasing by ~7% per 1 °C of climate warming. We compare these data with similar debris flow systems in the Mediterranean (southern Italy) and the tropics (Rio de Janeiro, Brazil), and find equivalent signals over a total temperature range of ~14 °C. In each area, debris flows are largely governed by rainfall intensity during triggering storms, which is known to increase exponentially with temperature. Therefore, we suggest that these debris flow systems are transporting predictably coarser-grained sediment in warmer, stormier conditions. This implies that debris flow sedimentology is governed by discharge thresholds and may be a sensitive proxy for past changes in rainfall intensity. Our findings show that debris flows are sensitive to climate changes over short timescales (≤ 10^4 years) and therefore highlight the importance of integrating hillslope processes into landscape evolution models, as well as providing new observational constraints to guide this. Finally, we comment on what grain size

  7. Lifting a familiar object: visual size analysis, not memory for object weight, scales lift force.

    Science.gov (United States)

    Cole, Kelly J

    2008-07-01

    The brain can accurately predict the forces needed to efficiently manipulate familiar objects in relation to mechanical properties such as weight. These predictions involve memory or some type of central representation, but visual analysis of size also yields accurate predictions of the needed fingertip forces. This raises the issue of which process (weight memory or visual size analysis) is used during everyday life when handling familiar objects. Our aim was to determine if subjects use a sensorimotor memory of weight, or a visual size analysis, to predictively set their vertical lift force when lifting a recently handled object. Two groups of subjects lifted an opaque brown bottle filled with water (470 g) during the first experimental session, and then rested for 15 min in a different room. Both groups were told that they would lift the same bottle in their next session. However, the experimental group returned to lift a slightly smaller bottle filled with water (360 g) that otherwise was identical in appearance to the first bottle. The control group returned to lift the same bottle from the first session, which was only partially filled with water so that it also weighed 360 g. At the end of the second session subjects were asked if they observed any changes between sessions, but no subject indicated awareness of a specific change. An acceleration ratio was computed by dividing the peak vertical acceleration during the first lift of the second session by the average peak acceleration of the last five lifts during the first session. This ratio was >1 for the control subjects, at 1.30 (SEM 0.08), indicating that they scaled their lift force for the first lift of the second session based on a memory of the (heavier) bottle from the first session. In contrast, the acceleration ratio was 0.94 (0.10) for the experimental group (P < 0.011). We conclude that the experimental group processed visual cues concerning the size of the bottle. These findings raise the

  8. Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild.

    Directory of Open Access Journals (Sweden)

    Franziska Broell

    Full Text Available This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming 'efficiently', is independent of size, confirming that stroke frequency scales as length^-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^-1 (r^2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild.

  9. Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild

    Science.gov (United States)

    Broell, Franziska; Taggart, Christopher T.

    2015-01-01

    This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming ‘efficiently’, is independent of size, confirming that stroke frequency scales as length^-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^-1 (r^2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild. PMID:26673777

  10. Optimizing embedded sensor network design for catchment-scale snow-depth estimation using LiDAR and machine learning

    Science.gov (United States)

    Oroza, Carlos A.; Zheng, Zeshi; Glaser, Steven D.; Tuia, Devis; Bales, Roger C.

    2016-10-01

    We evaluate the accuracy of a machine-learning algorithm that uses LiDAR data to optimize ground-based sensor placements for catchment-scale snow measurements. Sampling locations that best represent catchment physiographic variables are identified with the Expectation Maximization algorithm for a Gaussian mixture model. A Gaussian process is then used to model the snow depth in a 1 km^2 area surrounding the network, and additional sensors are placed to minimize the model uncertainty. The aim of the study is to determine the distribution of sensors that minimizes the bias and RMSE of the model. We compare the accuracy of the snow-depth model using the proposed placements to an existing sensor network at the Southern Sierra Critical Zone Observatory. Each model is validated with a 1 m^2 LiDAR-derived snow-depth raster from 14 March 2010. The proposed algorithm exhibits higher accuracy with fewer sensors (8 sensors, RMSE 38.3 cm, bias = 3.49 cm) than the existing network (23 sensors, RMSE 53.0 cm, bias = 15.5 cm) and randomized placements (8 sensors, RMSE 63.7 cm, bias = 24.7 cm). We then evaluate the spatial and temporal transferability of the method using 14 LiDAR scenes from two catchments within the JPL Airborne Snow Observatory. In each region, the optimized sensor placements are determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys is then compared to 100 configurations of sensors selected at random. We find the error statistics (bias and RMSE) to be more consistent across the additional surveys than the average random configuration.
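
    The two-stage procedure described above (an EM-fitted Gaussian mixture to pick physiographically representative cells, then a Gaussian process to add sensors where model uncertainty is largest) can be sketched compactly with scikit-learn. Everything below is a toy stand-in: the synthetic features replace the LiDAR-derived physiographic variables, and the greedy variance-reduction loop is only one plausible reading of "placed to minimize the model uncertainty".

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)

        # Synthetic stand-ins for gridded physiographic variables (e.g. elevation,
        # slope, canopy) at candidate cells; real inputs would come from LiDAR.
        n_cells = 2000
        features = rng.normal(size=(n_cells, 3))
        coords = rng.uniform(0, 1000, size=(n_cells, 2))          # metres
        snow_depth = 1.0 + 0.3 * features[:, 0] + 0.1 * rng.normal(size=n_cells)

        # Stage 1: pick the cells closest to the GMM component means
        # (physiographically representative sampling locations).
        gmm = GaussianMixture(n_components=5, random_state=0).fit(features)
        first_picks = [int(np.argmin(np.linalg.norm(features - m, axis=1)))
                       for m in gmm.means_]

        # Stage 2: greedily add sensors where GP predictive uncertainty is largest.
        picked = list(first_picks)
        for _ in range(3):
            gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.01))
            gp.fit(coords[picked], snow_depth[picked])
            _, std = gp.predict(coords, return_std=True)
            std[picked] = -np.inf                 # do not re-pick existing sites
            picked.append(int(np.argmax(std)))

        print("proposed sensor cells:", picked)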

  11. Intraspecific Scaling Relationships Between Crawling Speed and Body Size in a Gastropod.

    Science.gov (United States)

    Hemmert, Heather M; Baltzley, Michael J

    2016-02-01

    Across various modes of locomotion, body size and speed are often correlated both between and within species. Among the gastropods, however, current data are minimal for interspecific and intraspecific scaling relationships. In this study, we tested the relationships between various measurements of body size and crawling speed in the terrestrial snail Cornu aspersum. We also investigated the relationships between crawling speed, muscular wave frequency, and muscular wavelength, because, while these relationships within individuals are well studied, the relationships among individuals are unknown. We recorded snails crawling on both a horizontal and a vertical surface. We found that when they crawled on a horizontal surface, foot length was positively correlated with pedal wavelength and crawling speed, but was not correlated with wave frequency. In comparison, when they crawled on a vertical surface, foot length was positively correlated with wavelength, negatively correlated with wave frequency, and not correlated with crawling speed. Body mass had no correlation with crawling speed when snails were crawling on a horizontal surface, but was negatively correlated with speed when snails crawled on a vertical surface.

  12. Scaling of heat production by thermogenic flowers: limits to floral size and maximum rate of respiration.

    Science.gov (United States)

    Seymour, Roger S

    2010-09-01

    Effect of size of inflorescences, flowers and cones on maximum rate of heat production is analysed allometrically in 23 species of thermogenic plants having diverse structures and ranging between 1.8 and 600 g. Total respiration rate (micromol s^-1) varies allometrically with spadix mass (M, g) in 15 species of Araceae. Thermal conductance (C, mW °C^-1) for spadices scales according to C = 18.5 M^0.73. Mass does not significantly affect the difference between floral and air temperature. Aroids with exposed appendices with high surface area have high thermal conductance, consistent with the need to vaporize attractive scents. True flowers have significantly lower heat production and thermal conductance, because closed petals retain heat that benefits resident insects. The florets on aroid spadices, either within a floral chamber or spathe, have intermediate thermal conductance, consistent with mixed roles. Mass-specific rates of respiration are variable between species, but reach 900 nmol s^-1 g^-1 in aroid male florets, exceeding rates of all other plants and even most animals. Maximum mass-specific respiration appears to be limited by oxygen delivery through individual cells. Reducing mass-specific respiration may be one selective influence on the evolution of large size of thermogenic flowers.

  13. Paper coatings with multi-scale roughness evaluated at different sampling sizes

    Energy Technology Data Exchange (ETDEWEB)

    Samyn, Pieter, E-mail: Pieter.Samyn@UGent.be [Ghent University - Department of Textiles, Technologiepark 907, B-9052 Zwijnaarde (Belgium); Van Erps, Juergen; Thienpont, Hugo [Vrije Universiteit Brussels - Department of Applied Physics and Photonics, Pleinlaan 2, B-1050 Brussels (Belgium); Schoukens, Gustaaf [Ghent University - Department of Textiles, Technologiepark 907, B-9052 Zwijnaarde (Belgium)

    2011-04-15

    Papers have a complex hierarchical structure and the end-user functionalities such as hydrophobicity are controlled by a finishing layer. The application of an organic nanoparticle coating and drying of the aqueous dispersion results in a unique surface morphology with microscale domains that are internally patterned with nanoparticles. Better understanding of the multi-scale surface roughness patterns is obtained by monitoring the topography with non-contact profilometry (NCP) and atomic force microscopy (AFM) at different sampling areas ranging from 2000 {mu}m x 2000 {mu}m to 0.5 {mu}m x 0.5 {mu}m. The statistical roughness parameters are uniquely related to each other over the different measuring techniques and sampling sizes, as they are purely statistically determined. However, they cannot be directly extrapolated over the different sampling areas as they represent transitions at the nano-, micro-to-nano and microscale level. Therefore, the spatial roughness parameters including the correlation length and the specific frequency bandwidth should be taken into account for each measurement, which both allow for direct correlation of roughness data at different sampling sizes.

  14. The Relationship between Student Achievement, School District Economies of Scale, School District Size, and Student Socioeconomic Status

    Science.gov (United States)

    Trani, Randy

    2009-01-01

    The relationships between student achievement, school district economies of scale, school district size and student socioeconomic status were measured for 131 school districts in the state of Oregon. Data for school districts ranging in size from districts with around 300 students to districts with more than 40,000 students were collected for…

  15. Determination of kinetic effects on particle size and concentration: instruction for scale up

    Science.gov (United States)

    Zhang, Ling; Nakamura, Hiroyuki; Lee, Changi; Uehara, Masato; Maeda, Hideaki

    2011-10-01

    Increasing the synthesis scale is one of the most important issues in nanocrystal synthesis. The main difference between small and large reactors is their thermal transfer rate, which is reported to have great effects on particle nucleation and growth. In this paper, CdSe quantum dot synthesis was used as a model to investigate heating rate effects in a microreactor system capable of precisely controlling the temperature and heating rate. Results showed that heating rate effects depended strongly on the synthesis parameters. For example, in the 5% dodecanamine (DDA) case there was no heating rate effect, while in the 20% DDA case the heating rate affected both particle size distribution and morphology. Test experiments to demonstrate the up-scalability were conducted and showed that batch reactor products were comparable with microreactor products: the batch reactor gave the same product as the microreactor when the DDA concentration was 5%, but a quite different product when the DDA concentration was 20%. The data on the effects of heating rate obtained by this setup have high reliability and enable us to choose the proper method to increase the synthesis scale.

  16. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We
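
    The calculation described in METHODS (combining a trial's mRS outcome distribution with an inter-rater "noise" distribution to obtain an error percentage for the full scale versus a dichotomized cut-point) can be illustrated with a toy numpy example; the outcome distribution and confusion matrix below are invented for illustration, not the published ones.

        import numpy as np

        # Illustrative outcome distribution over mRS categories 0..6 for one trial arm
        p_true = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])

        # Illustrative inter-rater confusion matrix: row = true category,
        # column = recorded category (rows sum to 1).
        confusion = np.full((7, 7), 0.02)
        np.fill_diagonal(confusion, 0.0)
        confusion += np.diag(1.0 - confusion.sum(axis=1))

        # Full-scale ("shift") error: probability that recorded != true category.
        p_err_full = float(np.sum(p_true * (1.0 - np.diag(confusion))))

        # Dichotomized error at cut-point mRS <= 1: probability that the recorded
        # side of the cut-point differs from the true side.
        good = np.arange(7) <= 1
        cross = confusion[np.ix_(good, ~good)].sum(axis=1)       # good -> bad
        cross_back = confusion[np.ix_(~good, good)].sum(axis=1)  # bad -> good
        p_err_dich = float(p_true[good] @ cross + p_true[~good] @ cross_back)

        print(f"full-scale error: {p_err_full:.1%}, dichotomized error: {p_err_dich:.1%}")

    With these made-up numbers the dichotomized error is roughly half the full-scale error, which mirrors the qualitative conclusion of the record: only misclassifications that cross the cut-point count against a dichotomized outcome.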

  17. Estimation of source parameters and scaling relations for moderate size earthquakes in North-West Himalaya

    Science.gov (United States)

    Kumar, Vikas; Kumar, Dinesh; Chopra, Sumer

    2016-10-01

    The scaling relation and self-similarity of the earthquake process have been investigated by estimating the source parameters of 34 moderate size earthquakes (mb 3.4-5.8) that occurred in the NW Himalaya. The spectral analysis of body waves of 217 accelerograms recorded at 48 sites has been carried out in the present analysis. Brune's ω^-2 model has been adopted for this purpose. The average ratio of the P-wave corner frequency, fc(P), to the S-wave corner frequency, fc(S), has been found to be 1.39, with fc(P) > fc(S) for 90% of the events analyzed here. This implies a shift in the corner frequency in agreement with many other similar studies done for different regions. The static stress drop values for all the events analyzed here lie in the range 10-100 bars, with an average stress drop of the order of 43 ± 19 bars for the region. This suggests that the likely estimate of the dynamic stress drop, which is 2-3 times the static stress drop, is in the range of about 80-120 bars. This suggests relatively high seismic hazard in the NW Himalaya, as high frequency strong ground motions are governed by the stress drop. The estimated values of stress drop do not show significant variation with seismic moment for the range 5 × 10^14 - 2 × 10^17 N m. This observation, along with the cube root scaling of corner frequencies, suggests the self-similarity of the moderate size earthquakes in the region. The scaling relation between seismic moment and corner frequency, M_0 f_c^3 = 3.47 × 10^16 N m/s^3, estimated in the present study can be utilized to estimate the source dimension given the seismic moment of the earthquake for the hazard assessment. The present study puts constraints on the important parameters stress drop and source dimension required for the synthesis of strong ground motion from future expected earthquakes in the region. Therefore, the present study is useful for seismic hazard and risk related studies for the NW Himalaya.
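
    As an illustration of how the quoted relation is used (a worked example with round numbers, not values from the study): the scaling relation fixes the corner frequency once the seismic moment is known,

        $$M_0 f_c^3 = 3.47\times10^{16}\ \mathrm{N\,m\,s^{-3}}
          \;\Rightarrow\;
          f_c = \left(\frac{3.47\times10^{16}}{M_0}\right)^{1/3},
          \qquad
          \text{e.g. } M_0 = 10^{16}\ \mathrm{N\,m}
          \;\Rightarrow\; f_c \approx 1.5\ \mathrm{Hz},$$

    after which a source dimension can be obtained from a corner-frequency-to-radius relation such as Brune's $r = 2.34\,\beta/(2\pi f_c)$ for an assumed shear-wave speed $\beta$; the specific radius relation used in the study is not stated in the abstract, so that choice is an assumption here.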

  18. Impact of atomic-scale surface morphology on the size-dependent yield stress of gold nanoparticles

    Science.gov (United States)

    Yang, Liang; Bian, Jian-Jun; Wang, Gang-Feng

    2017-06-01

    Size-dependent mechanical properties have been revealed for nanowires, nanopillars and nanoparticles. On the surfaces of these nanosized elements, discrete atomic-scale steps are naturally generated; however, their impact on the mechanical properties and deformation has seldom been a concern. In this paper, large-scale molecular dynamics simulations are conducted to calculate the yield stress of gold nanoparticles under compression. In addition to the absolute particle size, atomic-scale surface morphology induces significant fluctuation of the yield stress. An analytical relation is advanced to predict the yield stress of nanoparticles accounting for the influence of both size and surface morphology, which agrees well with atomic simulations. This study illuminates an important mechanism in nanosized elements: atomic-scale surface steps.

  19. A statistical methodology to derive the scaling law for the H-mode power threshold using a large multi-machine database

    Science.gov (United States)

    Murari, A.; Lupelli, I.; Gaudio, P.; Gelfusa, M.; Vega, J.

    2012-06-01

    In this paper, a refined set of statistical techniques is developed and then applied to the problem of deriving the scaling law for the threshold power to access the H-mode of confinement in tokamaks. This statistical methodology is applied to the 2010 version of the ITPA International Global Threshold Data Base v6b (IGDBTHv6b). To increase the engineering and operative relevance of the results, only macroscopic physical quantities, measured in the vast majority of experiments, have been considered as candidate variables in the models. Different principled methods, such as agglomerative hierarchical variable clustering, without assumptions about the functional form of the scaling, and nonlinear regression, are implemented to select the best subset of candidate independent variables and to improve the regression model accuracy. Two independent model selection criteria, based on the classical (Akaike information criterion) and Bayesian formalism (Bayesian information criterion), are then used to identify the most efficient scaling law from the candidate models. The results derived from the full multi-machine database confirm the results of previous analyses but emphasize the importance of shaping quantities, elongation and triangularity. On the other hand, the scaling laws for the different machines and at different currents are different from each other at a confidence level well above 95%, suggesting caution in the use of the global scaling laws for both interpretation and extrapolation purposes.
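
    The model-selection step described above (candidate power-law scaling laws fitted in log space and ranked by AIC and BIC) can be illustrated with statsmodels. The variable names and synthetic data below merely stand in for the IGDBTHv6b quantities; they are not the study's regressors or fitted exponents.

        import itertools
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the multi-machine database (log-linear => power law).
        n = 300
        df = pd.DataFrame({
            "ne":    rng.lognormal(1.0, 0.4, n),   # density (illustrative)
            "Bt":    rng.lognormal(0.5, 0.3, n),   # toroidal field (illustrative)
            "S":     rng.lognormal(2.0, 0.5, n),   # surface area (illustrative)
            "kappa": rng.lognormal(0.3, 0.2, n),   # elongation (illustrative)
        })
        df["Pthr"] = (0.05 * df.ne**0.7 * df.Bt**0.8 * df.S**0.9
                      * np.exp(0.1 * rng.normal(size=n)))

        y = np.log(df["Pthr"])
        candidates = ["ne", "Bt", "S", "kappa"]

        results = []
        for k in range(1, len(candidates) + 1):
            for subset in itertools.combinations(candidates, k):
                X = sm.add_constant(np.log(df[list(subset)]))
                fit = sm.OLS(y, X).fit()
                results.append((fit.aic, fit.bic, subset))

        results.sort()
        for aic, bic, subset in results[:3]:
            print(f"AIC={aic:8.1f}  BIC={bic:8.1f}  vars={subset}")

    Because the fit is linear in log space, each selected subset corresponds directly to a power-law scaling law, and the two information criteria penalize extra regressors differently, which is exactly the comparison the record describes.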

  20. Scaling of wingbeat frequency with body mass in bats and limits to maximum bat size.

    Science.gov (United States)

    Norberg, Ulla M Lindhe; Norberg, R Åke

    2012-03-01

    The ability to fly opens up ecological opportunities but flight mechanics and muscle energetics impose constraints, one of which is that the maximum body size must be kept below a rather low limit. The muscle power available for flight increases in proportion to flight muscle mass and wingbeat frequency. The maximum wingbeat frequency attainable among increasingly large animals decreases faster than the minimum frequency required, so eventually they coincide, thereby defining the maximum body mass at which the available power just matches up to the power required for sustained aerobic flight. Here, we report new wingbeat frequency data for 27 morphologically diverse bat species representing nine families, and additional data from the literature for another 38 species, together spanning a range from 2.0 to 870 g. For these species, wingbeat frequency decreases with increasing body mass as M_b^-0.26. We filmed 25 of our 27 species in free flight outdoors, and for these the wingbeat frequency varies as M_b^-0.30. These exponents are strikingly similar to the body mass dependency M_b^-0.27 among birds, but the wingbeat frequency is higher in birds than in bats for any given body mass. The downstroke muscle mass is also a larger proportion of the body mass in birds. We applied these empirically based scaling functions for wingbeat frequency in bats to biomechanical theories about how the power required for flight and the power available converge as animal size increases. To this end we estimated the muscle mass-specific power required for the largest flying extant bird (12-16 kg) and assumed that the largest potential bat would exert similar muscle mass-specific power. Given the observed scaling of wingbeat frequency and the proportion of the body mass that is made up by flight muscles in birds and bats, we estimated the maximum potential body mass for bats to be 1.1-2.3 kg. The largest bats, extinct or extant, weigh 1.6 kg. This is within the range expected if it

  1. Scaling relationship for NO2 pollution and urban population size: a satellite perspective.

    Science.gov (United States)

    Lamsal, L N; Martin, R V; Parrish, D D; Krotkov, N A

    2013-07-16

    Concern is growing about the effects of urbanization on air pollution and health. Nitrogen dioxide (NO2) released primarily from combustion processes, such as traffic, is a short-lived atmospheric pollutant that serves as an air-quality indicator and is itself a health concern. We derive a global distribution of ground-level NO2 concentrations from tropospheric NO2 columns retrieved from the Ozone Monitoring Instrument (OMI). Local scaling factors from a three-dimensional chemistry-transport model (GEOS-Chem) are used to relate the OMI NO2 columns to ground-level concentrations. The OMI-derived surface NO2 data are significantly correlated (r = 0.69) with in situ surface measurements. We examine how the OMI-derived ground-level NO2 concentrations, OMI NO2 columns, and bottom-up NOx emission inventories relate to urban population. Emission hot spots, such as power plants, are excluded to focus on urban relationships. The correlation of surface NO2 with population is significant for the three countries and one continent examined here: United States (r = 0.71), Europe (r = 0.67), China (r = 0.69), and India (r = 0.59). Urban NO2 pollution, like other urban properties, is a power law scaling function of the population size: NO2 concentration increases proportional to population raised to an exponent. The value of the exponent varies by region from 0.36 for India to 0.66 for China, reflecting regional differences in industrial development and per capita emissions. It has been generally established that energy efficiency increases and, therefore, per capita NOx emissions decrease with urban population; here, we show how outdoor ambient NO2 concentrations depend upon urban population in different global regions.
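
    Because the reported relationship is a power law, NO2 ∝ population^α, the exponent is simply the slope of a log-log regression of concentration on population. A minimal sketch with invented city-level numbers (the record's exponents, 0.36-0.66 depending on region, come from the OMI-derived data, not from anything below):

        import numpy as np

        # Illustrative city-level data: population and satellite-derived surface NO2 (ppb)
        population = np.array([2.0e5, 5.0e5, 1.2e6, 3.5e6, 8.0e6, 2.0e7])
        no2        = np.array([1.8,   2.6,   3.9,   6.1,   8.5,  13.0])

        # NO2 = c * population**alpha  =>  log NO2 = log c + alpha * log population
        alpha, log_c = np.polyfit(np.log(population), np.log(no2), 1)
        print(f"scaling exponent alpha ~= {alpha:.2f}")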

  2. Finite Size Scaling and the Universality Class of SU(2) Lattice Gauge Theory

    Science.gov (United States)

    Staniford-Chen, Stuart Gresley

    For a system near a second order phase transition, the correlation length becomes extremely large. This gives rise to much interesting physics such as the existence of critical exponents and the division of physical theories into universality classes. SU(2) lattice gauge theory has such a phase transition at finite temperature and it has been persuasively argued in the literature that it should be in the same universality class as the Ising model in a space with dimensionality one less than the gauge theory. This is in the sense that the effective theory for the SU(2) Wilson lines is universal with the Ising model. This prediction has been checked for d = 3 + 1 SU(2) by comparing the critical exponents, and those checks appear to confirm it to the modest accuracy currently available. However, the theory of finite size scaling predicts a very rich set of objects which should be the same across universality classes. For example, the shape of the graph of various observables against temperature near the transition is universal. Not only that, but whole collections of probability distributions as a function of temperature can be given a scaling form and the shape of this object is universal. I develop a methodology for comparing such sets of distributions. This gives a two dimensional surface for each theory which can then be used in comparisons. I then use this approach and compare the surface for the order parameter in SU(2) with that in phi^4. The visual similarity is very striking. I perform a semi-quantitative error analysis which does not reveal significant differences between the two surfaces. This strengthens the idea that the SU(2) effective line theory is in the Ising universality class. I conclude by discussing the advantages and disadvantages of the method used here.

  3. Spatial scales of light transmission through Antarctic pack ice: Surface flooding vs. floe-size distribution

    Science.gov (United States)

    Arndt, S.; Meiners, K.; Krumpen, T.; Ricker, R.; Nicolaus, M.

    2016-12-01

    Snow on sea ice plays a crucial role for interactions between the ocean and atmosphere within the climate system of polar regions. Antarctic sea ice is covered with snow during most of the year. The snow contributes substantially to the sea-ice mass budget as the heavy snow loads can depress the ice below water level causing flooding. Refreezing of the snow and seawater mixture results in snow-ice formation on the ice surface. The snow cover determines also the amount of light being reflected, absorbed, and transmitted into the upper ocean, determining the surface energy budget of ice-covered oceans. The amount of light penetrating through sea ice into the upper ocean is of critical importance for the timing and amount of bottom sea-ice melt, biogeochemical processes and under-ice ecosystems. Here, we present results of several recent observations in the Weddell Sea measuring solar radiation under Antarctic sea ice with instrumented Remotely Operated Vehicles (ROV). The combination of under-ice optical measurements with simultaneous characterization of surface properties, such as sea-ice thickness and snow depth, allows the identification of key processes controlling the spatial distribution of the under-ice light. Thus, our results show how the distinction between flooded and non-flooded sea-ice regimes dominates the spatial scales of under-ice light variability for areas smaller than 100-by-100 m. In contrast, the variability on larger scales seems to be controlled by the floe-size distribution and the associated lateral incidence of light. These results are related to recent studies on the spatial variability of Arctic under-ice light fields focusing on the distinctly differing dominant surface properties between the northern (e.g. summer melt ponds) and southern (e.g. year-round snow cover, surface flooding) hemisphere sea-ice cover.

  4. Top-spray fluid bed coating: Scale-up in terms of relative droplet size and drying force

    DEFF Research Database (Denmark)

    Hede, Peter Dybdahl; Bach, P.; Jensen, Anker Degn

    2008-01-01

    that none of the two parameters alone may be used for successful scaling. Morphology and microscope studies indicated that the coating layer is homogenous and has similar structures across scale only when both the drying force and the relative droplet size were fixed. Impact and attrition tests indicated......Top-spray fluid bed coating scale-up experiments have been performed in three scales in order to test the validity of two parameters as possible scaling parameters: The drying force and the relative droplet size. The aim was to be able to reproduce the degree of agglomeration as well...... as the mechanical properties of the coated granules across scale. Two types of placebo enzyme granule cores were tested being non-porous glass ballotini cores (180-350 mu m) and low porosity sodium sulphate cores (180-350 mu m). Both types of core materials were coated with aqueous solutions of Na2SO4 using Dextrin...

  5. When Machines Design Machines!

    DEFF Research Database (Denmark)

    2011-01-01

    Until recently we were the sole designers, alone in the driving seat making all the decisions. But we have created a world of complexity way beyond human ability to understand, control, and govern. Machines now do more trades than humans on stock markets, they control our power, water, gas...... and food supplies, manage our elevators, microclimates, automobiles and transport systems, and manufacture almost everything. It should come as no surprise that machines are now designing machines. The chips that power our computers and mobile phones, the robots and commercial processing plants on which we...... depend, all are now largely designed by machines. So what of us - will we be totally usurped, or are we looking at a new symbiosis, with human and artificial intelligences combined to realise the best outcomes possible? In most respects we have no choice! Human abilities alone cannot solve any of the major...

  6. High-latitude HF Doppler observations of ULF waves: 2. Waves with small spatial scale sizes

    Directory of Open Access Journals (Sweden)

    D. M. Wright

    Full Text Available The DOPE (Doppler Pulsation Experiment) HF Doppler sounder located near Tromsø, Norway (geographic: 69.6°N 19.2°E; L = 6.3) is deployed to observe signatures, in the high-latitude ionosphere, of magnetospheric ULF waves. A type of wave has been identified which exhibits no simultaneous ground magnetic signature. These waves can be subdivided into two classes, which occur in the dawn and dusk local time sectors respectively. They generally have frequencies greater than the resonance fundamentals of local field lines. It is suggested that these may be the signatures of high-m ULF waves where the ground magnetic signature has been strongly attenuated as a result of the scale size of the waves. The dawn population demonstrates similarities to a type of magnetospheric wave known as giant (Pg) pulsations, which tend to be resonant at higher harmonics on magnetic field lines. In contrast, the waves occurring in the dusk sector are believed to be related to the storm-time Pc5s previously reported in VHF radar data. Dst measurements support these observations by indicating that the dawn and dusk classes of waves occur during geomagnetically quiet and more active intervals, respectively.

    Key words. Ionosphere (auroral ionosphere; ionosphere-magnetosphere interactions) · Magnetospheric physics (MHD waves and instabilities)

  7. Spontaneous chiral symmetry breaking in QCD:a finite-size scaling study on the lattice

    CERN Document Server

    Giusti, Leonardo; Giusti, Leonardo; Necco, Silvia

    2007-01-01

    Spontaneous chiral symmetry breaking in QCD with massless quarks at infinite volume can be seen in a finite box by studying, for instance, the dependence of the chiral condensate on the volume and the quark mass. We perform a feasibility study of this program by computing the quark condensate on the lattice in the quenched approximation of QCD at small quark masses. We carry out simulations in various topological sectors of the theory at several volumes, quark masses and lattice spacings by employing fermions with an exact chiral symmetry, and we focus on observables which are infrared stable and free from mass-dependent ultraviolet divergences. The numerical calculation is carried out with an exact variance-reduction technique, which is designed to be particularly efficient when spontaneous symmetry breaking is at work in generating a few very small low-lying eigenvalues of the Dirac operator. The finite-size scaling behaviour of the condensate in the topological sectors considered agrees, within our stati...

  8. Fast Training of Support Vector Machines Using Error-Center-Based Optimization

    Institute of Scientific and Technical Information of China (English)

    L. Meng; Q. H. Wu

    2005-01-01

    This paper presents a new algorithm for Support Vector Machine (SVM) training, which trains a machine based on the cluster centers of errors caused by the current machine. Experiments with various training sets show that the computation time of this new algorithm scales almost linearly with training set size, and thus it may be applied to much larger training sets, in comparison to standard quadratic programming (QP) techniques.
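
    The abstract states only the idea of the algorithm, i.e. retraining on the cluster centers of the current machine's errors. The sketch below is one plausible reading of that idea, not the authors' implementation: iteratively train an SVM on a small working set, cluster the currently misclassified points with k-means, and grow the working set with the misclassified point nearest each cluster center.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
        rng = np.random.default_rng(0)

        # Start from a small random working set and grow it using error clusters.
        work = [int(i) for i in rng.choice(len(X), size=100, replace=False)]
        for _ in range(10):
            svm = SVC(kernel="rbf", C=1.0).fit(X[work], y[work])
            errors = np.flatnonzero(svm.predict(X) != y)
            if len(errors) == 0:
                break
            km = KMeans(n_clusters=min(10, len(errors)), n_init=10,
                        random_state=0).fit(X[errors])
            # Add the misclassified point nearest each error-cluster centre.
            for c in km.cluster_centers_:
                nearest = int(errors[np.argmin(np.linalg.norm(X[errors] - c, axis=1))])
                if nearest not in work:
                    work.append(nearest)

        print("final working-set size:", len(work),
              "training accuracy:", svm.score(X, y))

    The point of such a scheme is that each QP solve only involves the working set, which stays far smaller than the full training set, giving the near-linear overall scaling the abstract reports.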

  9. Urbanisation at multiple scales is associated with larger size and higher fecundity of an orb-weaving spider.

    Science.gov (United States)

    Lowe, Elizabeth C; Wilder, Shawn M; Hochuli, Dieter F

    2014-01-01

    Urbanisation modifies landscapes at multiple scales, impacting the local climate and changing the extent and quality of natural habitats. These habitat modifications significantly alter species distributions and can result in increased abundance of select species which are able to exploit novel ecosystems. We examined the effect of urbanisation at local and landscape scales on the body size, lipid reserves and ovary weight of Nephila plumipes, an orb weaving spider commonly found in both urban and natural landscapes. Habitat variables at landscape, local and microhabitat scales were integrated to create a series of indexes that quantified the degree of urbanisation at each site. Spider size was negatively associated with vegetation cover at a landscape scale, and positively associated with hard surfaces and anthropogenic disturbance on a local and microhabitat scale. Ovary weight increased in higher socioeconomic areas and was positively associated with hard surfaces and leaf litter at a local scale. The larger size and increased reproductive capacity of N. plumipes in urban areas show that some species benefit from the habitat changes associated with urbanisation. Our results also highlight the importance of incorporating environmental variables from multiple scales when quantifying species responses to landscape modification.

  10. Urbanisation at multiple scales is associated with larger size and higher fecundity of an orb-weaving spider.

    Directory of Open Access Journals (Sweden)

    Elizabeth C Lowe

    Full Text Available Urbanisation modifies landscapes at multiple scales, impacting the local climate and changing the extent and quality of natural habitats. These habitat modifications significantly alter species distributions and can result in increased abundance of select species which are able to exploit novel ecosystems. We examined the effect of urbanisation at local and landscape scales on the body size, lipid reserves and ovary weight of Nephila plumipes, an orb weaving spider commonly found in both urban and natural landscapes. Habitat variables at landscape, local and microhabitat scales were integrated to create a series of indexes that quantified the degree of urbanisation at each site. Spider size was negatively associated with vegetation cover at a landscape scale, and positively associated with hard surfaces and anthropogenic disturbance on a local and microhabitat scale. Ovary weight increased in higher socioeconomic areas and was positively associated with hard surfaces and leaf litter at a local scale. The larger size and increased reproductive capacity of N. plumipes in urban areas show that some species benefit from the habitat changes associated with urbanisation. Our results also highlight the importance of incorporating environmental variables from multiple scales when quantifying species responses to landscape modification.

  11. Scaling of stomatal size and density optimizes allocation of leaf epidermal space for gas exchange in angiosperms

    Science.gov (United States)

    de Boer, Hugo Jan; Price, Charles A.; Wagner-Cremer, Friederike; Dekker, Stefan C.; Franks, Peter J.; Veneklaas, Erik J.

    2015-04-01

    Stomata on plant leaves are key traits in the regulation of terrestrial fluxes of water and carbon. The basic morphology of stomata consists of a diffusion pore and two guard cells that regulate the exchange of CO2 and water vapour between the leaf interior and the atmosphere. This morphology is common to nearly all land plants, yet stomatal size (defined as the area of the guard cell pair) and stomatal density (the number of stomata per unit area) range over three orders of magnitude across species. Evolution of stomatal sizes and densities is driven by selection pressure on the anatomical maximum stomatal conductance (gsmax), which determines the operational range of leaf gas exchange. Despite the importance of stomata traits for regulating leaf gas exchange, a quantitative understanding of the relation between adaptation of gsmax and the underlying co-evolution of stomatal sizes and densities is still lacking. Here we develop a theoretical framework for a scaling relationship between stomatal sizes and densities within the constraints set by the allocation of epidermal space and stomatal gas exchange. Our theory predicts an optimal scaling relationship that maximizes gsmax and minimizes epidermal space allocation to stomata. We test whether stomatal sizes and densities reflect this optimal scaling with a global compilation of stomatal trait data on 923 species reflecting most major clades. Our results show optimal scaling between stomatal sizes and densities across all species in the compiled data set. Our results also show optimal stomatal scaling across angiosperm species, but not across gymnosperm and fern species. We propose that the evolutionary flexibility of angiosperms to adjust stomatal sizes underlies their optimal allocation of leaf epidermal space to gas exchange.

  12. Multi-objective component sizing of a power-split plug-in hybrid electric vehicle powertrain using Pareto-based natural optimization machines

    Science.gov (United States)

    Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.

    2016-03-01

    The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitist non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
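
    Both optimizers named here are Pareto-based, so their common core is extracting the non-dominated set of candidate component sizings from their evaluated objectives (e.g. fuel consumption versus total energy cost). A self-contained sketch of that step with made-up objective values follows; no vehicle model is simulated, so the numbers are purely illustrative.

        import numpy as np

        def pareto_front(objs):
            """Return indices of non-dominated rows (all objectives minimized)."""
            n = len(objs)
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                if not keep[i]:
                    continue
                dominated_by_other = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
                if dominated_by_other.any():
                    keep[i] = False
            return np.flatnonzero(keep)

        rng = np.random.default_rng(0)
        # Hypothetical evaluations of candidate (battery, motor, engine) sizings:
        # column 0 = fuel consumption, column 1 = total energy cost.
        objectives = rng.uniform(size=(50, 2))
        front = pareto_front(objectives)
        print("non-dominated candidate sizings:", front)

    A genetic algorithm such as NSGA-II repeatedly applies this kind of non-dominated ranking (plus diversity preservation) to evolve the candidate sizings toward the Pareto front.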

  13. Evaluation of the column components size on the vapour enrichment and system performance in small power NH{sub 3}-H{sub 2}O absorption refrigeration machines

    Energy Technology Data Exchange (ETDEWEB)

    Sieres, Jaime; Fernandez-Seara, Jose [Area de Maquinas y Motores Termicos, Escuela Tecnica Superior de Ingenieros Industriales, Universidad de Vigo, Vigo (Spain)

    2006-06-15

    This paper presents an analysis of the influence of the distillation column components size on the vapour enrichment and system performance in small power NH{sub 3}-H{sub 2}O absorption machines with partial condensation. It is known that ammonia enrichment is required in this type of systems; otherwise water accumulates in the evaporator and strongly deteriorates the system performance and efficiency. The distillation column analysed consists of a stripping adiabatic section below the column feed point and an adiabatic rectifying packed section over it. The partial condensation of the vapour is produced at the top of the column by means of a heat integrated rectifier with the strong solution as coolant and a water cooled rectifier. Differential mathematical models based on mass and energy balances and heat and mass transfer equations have been developed for each one of the column sections and rectifiers, which allow defining their real dimensions. Results are shown for a given practical application. Specific geometric dimensions of the column components are considered. Different distillation column configurations are analysed by selecting and discarding the use of the possible components of the column and by changing their dimensions. The analysis and comparison of the different column arrangements has been based on the system COP and on the column dimensions. (author)

  14. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid and Building-to-Grid Interface

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Emma M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hendrix, Val [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Deka, Deepjyoti [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-16

    This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data and make predictions and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors – such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals – such as total carbon reduction or other economic benefit to customers. While some basic analysis into these data streams can provide a wealth of information, computational and human boundaries on performing the analysis

  15. An Innovative Farm Scale Biogas/Composting Facility for a Sustainable Medium Size Dairy Farm

    Directory of Open Access Journals (Sweden)

    Abdel E. Ghaly

    2012-01-01

    Full Text Available Approach: The amount of energy related costs as a portion of the total farm operating cost can be as high as 29%, and the continuing increase of the real cost of energy related farm inputs has been one of the major factors impacting the cost of agricultural production. However, agriculture has the potential of replacing some of the purchased energy in the form of fossil fuels, commercial fertilizer and field production of animal feed with bioenergy and organic fertilizer from onsite renewable biomass, such as animal manure, in order to sustain it economically and environmentally. The aim of this study was to develop an innovative, energy efficient pilot scale anaerobic digester and composting facility. Methodology: A solid/liquid manure separator, farm scale anaerobic digester and composting facility for a medium sized dairy farm were designed, constructed and tested. In order to make the anaerobic digestion economically viable under Canadian climatic conditions, the design, installation and operation of the system were based on advantages gained from the digester as a component of the total farm management system. In addition to the biogas production, benefits related to manure handling and storage, environmental quality improvement through odor control and water pollution reduction, fertilizer recovery and water recycling were considered. Results: The layout of the farm was modified to provide solutions for four environmental problems related to: disposal of milkhouse wastes and overflow from the manure storage facility into the fire pond. The system possesses high energy conversion efficiency at relatively low capital cost and reduced labour requirement, and has indirect energy ramifications through the production of organic fertilizer (compost) to replace expensive and energy consuming commercial fertilizer, as well as the production of bioenergy (biogas), which will reduce the demand for energy. The overflow from the system (purified water) can be

  16. Finite-size scaling of the magnetization probability density for the critical Ising model in slab geometry

    Science.gov (United States)

    Lopes Cardozo, David; Holdsworth, Peter C. W.

    2016-04-01

    The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume $L_\parallel^{\,d-1}\times L_\perp$ is computed through Monte-Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect-ratio $\rho = L_\perp/L_\parallel$ and boundary conditions are discussed. In the limiting case $\rho \to 0$ of a macroscopically large slab ($L_\parallel \gg L_\perp$) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.

  17. Size and scaling in the mandible of living and extinct apes.

    Science.gov (United States)

    Ravosa, M J

    2000-01-01

    The purpose of this study is to fill a gap in our knowledge of dietary and allometric determinants of masticatory function and mandibular morphology in major catarrhine clades. To extend the implications of previous work on variation in mandibular form and function in other primates, a scaling analysis was performed on 20 extinct and 7 living non-cercopithecoid catarrhines or 'dental apes'. Results of allometric comparisons indicate that for a given jaw length, larger apes exhibit significantly more robust corpora and symphyses than smaller forms. This appears linked to size-related increases in dietary toughness and/or hardness, which in turn causes elevated mandibular loads and/or greater repetitive loading during unilateral mastication. Larger-bodied dental apes also display more curved symphyses, which also explains the positive allometry of symphysis width and height. In apes, proconsulids often evince more robust jaws while all hylobatids, Pan and Dryopithecus laietanus possess more gracile cross sections. In propliopithecids, Aegyptopithecus is always more robust than Propliopithecus. In proconsulids, Rangwapithecus and Micropithecus commonly exhibit more robust jaws whereas Dendropithecus and especially Simiolus are more gracile. Most of the larger taxa are folivorous and/or hard-object frugivorous pongids with relatively larger dentaries. Though apes have relatively wider corpora than cercopithecines due to greater axial twisting of the corpora during chewing, they are otherwise alike in robusticity levels. Smaller apes are similar to cercopithecines in evincing a relatively high degree of symphyseal curvature, while larger taxa are like colobines in having less curvature. Larger pongids resemble or even exceed colobine jaw proportions and thus appear to converge on colobines in terms of the mechanical properties of their diets.

  18. Population size estimation of men who have sex with men through the network scale-up method in Japan.

    Directory of Open Access Journals (Sweden)

    Satoshi Ezoe

    Full Text Available BACKGROUND: Men who have sex with men (MSM are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. METHODS AND FINDINGS: An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. CONCLUSIONS: The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
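
    As an illustration of the basic scale-up estimator described above (a sketch, not the authors' code), personal network size is estimated from the acquaintances reported in groups of known size, and the hidden population is then scaled up from the reported MSM acquaintances; all group sizes and survey counts below are made-up numbers.

        # Hypothetical known subpopulation sizes and total reference population.
        known_group_sizes = {"firefighters": 160_000, "police": 290_000, "military": 230_000}
        total_population = 100_000_000

        # One dict per respondent: acquaintances in each known group plus reported MSM acquaintances.
        respondents = [
            {"firefighters": 1, "police": 0, "military": 2, "msm": 1},
            {"firefighters": 0, "police": 1, "military": 0, "msm": 0},
            {"firefighters": 2, "police": 1, "military": 1, "msm": 2},
        ]

        def network_scale_up(respondents, known_group_sizes, total_population):
            """Basic (Killworth-style) scale-up: estimate mean personal network size c,
            then hidden population size N_h = (mean hidden contacts / c) * N."""
            known_total = sum(known_group_sizes.values())
            c = [sum(r[g] for g in known_group_sizes) * total_population / known_total
                 for r in respondents]
            c_mean = sum(c) / len(c)
            hidden_mean = sum(r["msm"] for r in respondents) / len(respondents)
            return c_mean, hidden_mean / c_mean * total_population

        c_mean, msm_size = network_scale_up(respondents, known_group_sizes, total_population)
        print(f"mean network size ~ {c_mean:.0f}, scaled-up MSM population ~ {msm_size:.0f}")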

  19. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    Science.gov (United States)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue in advancing this science. This work is focused on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at point scale to a simplified, physically meaningful modeling approach at grid-cell scale. Numerical simulations have the advantage of dealing with a wide range of boundary and initial conditions, in contrast to field experimentation. The aim of the work was to show the utility of numerical simulations to discover relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium to teach the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at point scale. The linkages between point-scale parameters and grid-cell scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues
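
    For illustration, a minimal sketch of the point-scale Green-Ampt model referred to above is given below; it solves the standard implicit Green-Ampt equation for cumulative infiltration under ponding, and all parameter values (hydraulic conductivity, suction head, moisture deficit) are assumed, illustrative numbers rather than values from the study.

        import math

        def green_ampt_cumulative(t, K, psi, dtheta, tol=1e-8):
            """Cumulative infiltration F(t) from the implicit Green-Ampt equation
            F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t, via fixed-point iteration."""
            s = psi * dtheta            # suction head times moisture deficit
            F = max(K * t, 1e-9)        # initial guess
            for _ in range(200):
                F_new = K * t + s * math.log(1.0 + F / s)
                if abs(F_new - F) < tol:
                    break
                F = F_new
            return F

        def green_ampt_rate(F, K, psi, dtheta):
            """Point-scale infiltration capacity f = K * (1 + psi*dtheta/F)."""
            return K * (1.0 + psi * dtheta / F)

        # Assumed loam-like parameters: K = 0.65 cm/h, psi = 8.9 cm, dtheta = 0.3
        for t in (0.5, 1.0, 2.0):
            F = green_ampt_cumulative(t, K=0.65, psi=8.9, dtheta=0.3)
            f = green_ampt_rate(F, K=0.65, psi=8.9, dtheta=0.3)
            print(f"t = {t:3.1f} h   F = {F:5.2f} cm   f = {f:4.2f} cm/h")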

  20. Connected shape-size pattern spectra for rotation and scale-invariant classification of gray-scale images

    NARCIS (Netherlands)

    Urbach, Erik R.; Roerdink, Jos B.T.M.; Wilkinson, Michael H.F.

    2007-01-01

    In this paper, we describe a multiscale and multishape morphological method for pattern-based analysis and classification of gray-scale images using connected operators. Compared with existing methods, which use structuring elements, our method has three advantages. First, in our method, the time ne

  1. Finite size dependence of scaling functions of the three dimensional O(4) model in an external field

    CERN Document Server

    Engels, J

    2014-01-01

    We calculate universal finite size scaling functions for the order parameter and the longitudinal susceptibility of the three-dimensional O(4) model. The phase transition of this model is supposed to be in the same universality class as the chiral transition of two-flavor QCD. The scaling functions serve as a testing device for QCD simulations on small lattices, where, for example, pseudocritical temperatures are difficult to determine. In addition, we have improved the infinite volume limit parametrization of the scaling functions by using newly generated high statistics data for the 3d O(4) model in the high temperature region on an L=120 lattice.

  2. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong

    2016-07-01

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
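
    The core additivity idea can be illustrated with a short sketch (invented mass fractions and rate parameters; this is not the multi-rate statistical model of the study): a composite-sediment property is predicted as the mass-fraction-weighted sum of the properties measured for the individual grain-size fractions.

        # Hypothetical grain-size fractions of a composite sediment: mass fraction and a
        # laboratory-derived reaction property (e.g., a desorption rate parameter, 1/h).
        fractions = [
            {"size": "<0.5 mm",  "mass_fraction": 0.35, "rate": 2.4e-3},
            {"size": "0.5-2 mm", "mass_fraction": 0.40, "rate": 1.1e-3},
            {"size": "2-8 mm",   "mass_fraction": 0.25, "rate": 3.0e-4},  # gravel fraction, often ignored
        ]

        def additive_property(fractions, key="rate"):
            """Additivity model: composite property = sum_i f_i * p_i over size fractions."""
            total = sum(f["mass_fraction"] for f in fractions)
            assert abs(total - 1.0) < 1e-9, "mass fractions must sum to 1"
            return sum(f["mass_fraction"] * f[key] for f in fractions)

        print("predicted composite rate parameter:", additive_property(fractions))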

  3. Study of materials and machines for 3D printed large-scale, flexible electronic structures using fused deposition modeling

    Science.gov (United States)

    Hwang, Seyeon

    Three-dimensional printing (3DP), also called additive manufacturing (AM) or rapid prototyping (RP), has emerged to revolutionize manufacturing and completely transform how products are designed and fabricated. A great deal of research activity has been carried out to apply this new technology to a variety of fields. In spite of many endeavors, much more research is still required to perfect the processes of the 3D printing techniques, especially in the areas of large-scale additive manufacturing and flexible printed electronics. The principles of various 3D printing processes are briefly outlined in the Introduction Section. New types of thermoplastic polymer composites aimed at specific functional applications are also introduced in this section. Chapter 2 presents studies of the metal/polymer composite filaments for the fused deposition modeling (FDM) process. Various metal particles (copper and iron) are added into thermoplastic polymer matrices as the reinforcement filler. The thermo-mechanical properties, such as thermal conductivity, hardness, tensile strength, and fracture mechanism, of the composites are tested to determine the effects of metal fillers on 3D printed composite structures for the large-scale printing process. In Chapter 3, carbon/polymer composite filaments are developed by a simple mechanical blending process with the aim of fabricating flexible 3D printed electronics as a single structure. Various types of carbon particles consisting of multi-wall carbon nanotube (MWCNT), conductive carbon black (CCB), and graphite are used as the conductive fillers to provide the thermoplastic polyurethane (TPU) with improved electrical conductivity. The mechanical behavior and conduction mechanisms of the developed composite materials are examined in terms of the loading amount of carbon fillers in this section. Finally, the prototype flexible electronics are modeled and manufactured by the FDM process using carbon/TPU composite filaments and

  4. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong

    2016-07-31

    The additivity model assumed that field-scale reaction properties in a sediment including surface area, reactive site concentration, and reaction rate can be predicted from field-scale grain-size distribution by linearly adding reaction properties estimated in laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result found that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.

  5. A scaling theory for the size distribution of emitted dust aerosols suggests climate models underestimate the size of the global dust cycle

    CERN Document Server

    Kok, Jasper F

    2010-01-01

    Mineral dust aerosols impact Earth's radiation budget through interactions with clouds, ecosystems, and radiation, which constitutes a substantial uncertainty in understanding past and predicting future climate changes. One of the causes of this large uncertainty is that the size distribution of emitted dust aerosols is poorly understood. The present study shows that regional and global circulation models (GCMs) overestimate the emitted fraction of clay aerosols (< 2 {\\mu}m diameter) by a factor of ~2 - 8 relative to measurements. This discrepancy is resolved by deriving a simple theoretical expression of the emitted dust size distribution that is in excellent agreement with measurements. This expression is based on the physics of the scale-invariant fragmentation of brittle materials, which is shown to be applicable to dust emission. Because clay aerosols produce a strong radiative cooling, the overestimation of the clay fraction causes GCMs to also overestimate the radiative cooling of a given quantity o...

  6. Finite-size scaling relations for a four-dimensional Ising model on Creutz cellular automatons

    Science.gov (United States)

    Merdan, Z.; Güzelsoy, E.

    2011-06-01

    The four-dimensional Ising model is simulated on Creutz cellular automatons using finite lattices with linear dimensions 4 ≤ L ≤ 8. The temperature variations and finite-size scaling plots of the specific heat and the Binder parameter verify the theoretically predicted expression near the infinite lattice critical temperature for 7, 14, and 21 independent simulations. Approximate values for the critical temperature of the infinite lattice of Tc(∞) = 6.6965(35), 6.6961(30), 6.6960(12), 6.6800(3), 6.6801(2), 6.6802(1) and 6.6925(22) (without the logarithmic factor), 6.6921(22) (without the logarithmic factor), 6.6909(2) (without the logarithmic factor), 6.6822(13) (with the logarithmic factor), 6.6819(11) (with the logarithmic factor), and 6.6808(8) (with the logarithmic factor) are obtained from the intersection points of the specific heat curves, the Binder parameter curves, and straight line fits of specific heat maxima for 7, 14, and 21 independent simulations, respectively. As the number of independent simulations increases, the results, 6.6802(1) and 6.6808(8), are in very good agreement with the results of a series expansion of Tc(∞), 6.6817(15) and 6.6802(2), the dynamic Monte Carlo value Tc(∞) = 6.6803(1), the cluster Monte Carlo value Tc(∞) = 6.680(1), and the Monte Carlo value using the Metropolis-Wolff cluster algorithm Tc(∞) = 6.6802632 ± 5 × 10^-5. The average values calculated for the critical exponent of the specific heat are α = -0.0402(15), -0.0393(12), -0.0391(11) with 7, 14, and 21 independent simulations, respectively. As the number of independent simulations increases, the result, α = -0.0391(11), agrees with the series expansion result, α = -0.12 ± 0.03, and the Monte Carlo result using the Metropolis-Wolff cluster algorithm, α ≥ 0 ± 0.04. However, α = -0.0391(11) is inconsistent with the renormalization group prediction of α = 0.

  7. The dune size distribution and scaling relations of barchan dune fields

    NARCIS (Netherlands)

    Durán, O.; Schwämmle, V.; Lind, P.G.; Herrmann, H.J.

    2009-01-01

    Barchan dunes emerge as a collective phenomenon involving the generation of thousands of them in so-called barchan dune fields. By measuring the size and position of dunes in Moroccan barchan dune fields, we find that these dunes tend to distribute uniformly in space and follow a unique size distrib

  8. On the extent of size range and power law scaling for particles of natural carbonate fault cores

    Science.gov (United States)

    Billi, Andrea

    2007-09-01

    To determine the size range and both type and extent of the scaling laws for particles of loose natural carbonate fault rocks, six granular fault cores from Mesozoic carbonate strata of central Italy were sampled. Particle size distributions of twelve samples were determined by combining sieving and sedimentation methods. Results show that, regardless of the fault geometry, kinematics, and tectonic history, the size of fault rock particles respects a power law distribution across approximately four orders of magnitude. The fractal dimension ( D) of the particle size distribution in the analysed samples ranges between ˜2.0 and ˜3.5. A lower bound to the power law trend is evident in all samples except in those with the highest D-values; in these samples, the smallest analysed particles (˜0.0005 mm in diameter) were also included in the power law interval, meaning that the lower size limit of the power law distribution decreases for increasing D-values and that smallest particles start to be comminuted with increasing strain (i.e. increasing fault displacement and D-values). For increasing D-values, also the largest particles tends to decrease in number, but this evidence may be affected by a censoring bias connected with the sample size. Stick-slip behaviour is suggested for the studied faults on the basis of the inferred particle size evolutions. Although further analyses are necessary to make the results of this study more generalizable, the preliminary definition of the scaling rules for fault rock particles may serve as a tool for predicting a large scale of fault rock particles once a limited range is known. In particular, data from this study may result useful as input numbers in numerical models addressing the packing of fault rock particles for frictional and hydraulic purposes.
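
    For illustration, a fractal dimension D of the kind reported here can be read off a cumulative particle size distribution by fitting the power law N(>d) ∝ d^(-D) on log-log axes over the chosen size interval; the diameters and counts in the sketch below are invented.

        import numpy as np

        # Hypothetical sieving/sedimentation data: particle diameter d (mm) and the
        # cumulative number of particles larger than d; a power law N(>d) ~ d**(-D)
        # appears as a straight line of slope -D on log-log axes.
        d = np.array([0.001, 0.01, 0.1, 1.0, 10.0])           # mm
        N_gt = np.array([2.0e7, 1.1e6, 4.5e4, 2.0e3, 9.0e1])  # counts

        def fractal_dimension(d, N_gt):
            """Least-squares slope of log10 N(>d) versus log10 d; D is minus the slope."""
            slope, _ = np.polyfit(np.log10(d), np.log10(N_gt), 1)
            return -slope

        print(f"fitted fractal dimension D = {fractal_dimension(d, N_gt):.2f}")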

  9. A comparison of machine learning algorithms for chemical toxicity classification using a simulated multi-scale data model

    Directory of Open Access Journals (Sweden)

    Li Zhen

    2008-05-01

    Full Text Available Abstract Background Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts are aimed at discovering patterns or classifiers in high-dimensional bioactivity space that predict tissue, organ or whole animal toxicological endpoints. Supervised machine learning is a powerful approach to discover combinatorial relationships in complex in vitro/in vivo datasets. We present a novel model to simulate complex chemical-toxicology data sets and use this model to evaluate the relative performance of different machine learning (ML methods. Results The classification performance of Artificial Neural Networks (ANN, K-Nearest Neighbors (KNN, Linear Discriminant Analysis (LDA, Naïve Bayes (NB, Recursive Partitioning and Regression Trees (RPART, and Support Vector Machines (SVM in the presence and absence of filter-based feature selection was analyzed using K-way cross-validation testing and independent validation on simulated in vitro assay data sets with varying levels of model complexity, number of irrelevant features and measurement noise. While the prediction accuracy of all ML methods decreased as non-causal (irrelevant features were added, some ML methods performed better than others. In the limit of using a large number of features, ANN and SVM were always in the top performing set of methods while RPART and KNN (k = 5 were always in the poorest performing set. The addition of measurement noise and irrelevant features decreased the classification accuracy of all ML methods, with LDA suffering the greatest performance degradation. LDA performance is especially sensitive to the use of feature selection. Filter-based feature selection generally improved performance, most strikingly for LDA. Conclusion We have developed a novel simulation model to evaluate machine learning methods for the
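
    A compact sketch of this kind of comparison is given below, using scikit-learn classifiers with k-fold cross-validation on a synthetic data set containing irrelevant features and label noise; it illustrates the workflow only, not the paper's simulation model, and the classifier settings and data sizes are assumptions (a decision tree stands in for RPART).

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier

        # Synthetic "assay" data: 20 informative features plus 80 irrelevant ones,
        # with 5% of the labels flipped to mimic measurement noise.
        X, y = make_classification(n_samples=1000, n_features=100, n_informative=20,
                                   n_redundant=0, flip_y=0.05, random_state=0)

        models = {
            "ANN":  MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
            "KNN":  KNeighborsClassifier(n_neighbors=5),
            "LDA":  LinearDiscriminantAnalysis(),
            "NB":   GaussianNB(),
            "Tree": DecisionTreeClassifier(random_state=0),  # stand-in for RPART
            "SVM":  SVC(kernel="rbf", C=1.0),
        }

        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation accuracy
            print(f"{name:4s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")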

  10. The moon illusion and size-distance scaling--evidence for shared neural patterns.

    Science.gov (United States)

    Weidner, Ralph; Plewan, Thorsten; Chen, Qi; Buchner, Axel; Weiss, Peter H; Fink, Gereon R

    2014-08-01

    A moon near to the horizon is perceived larger than a moon at the zenith, although--obviously--the moon does not change its size. In this study, the neural mechanisms underlying the "moon illusion" were investigated using a virtual 3-D environment and fMRI. Illusory perception of an increased moon size was associated with increased neural activity in ventral visual pathway areas including the lingual and fusiform gyri. The functional role of these areas was further explored in a second experiment. Left V3v was found to be involved in integrating retinal size and distance information, thus indicating that the brain regions that dynamically integrate retinal size and distance play a key role in generating the moon illusion.

  11. The Effects of Transient Emotional State and Workload on Size Scaling in Perspective Displays

    Energy Technology Data Exchange (ETDEWEB)

    Tuan Q. Tran; Kimberly R. Raddatz

    2006-10-01

    Previous research has been devoted to the study of perceptual (e.g., number of depth cues) and cognitive (e.g., instructional set) factors that influence veridical size perception in perspective displays. However, considering that perspective displays have utility in high workload environments that often induce high arousal (e.g., aircraft cockpits), the present study sought to examine the effect of observers’ emotional state on the ability to perceive and judge veridical size. Within a dual-task paradigm, observers’ ability to make accurate size judgments was examined under conditions of induced emotional state (positive, negative, neutral) and high and low workload. Results showed that participants in both positive and negative induced emotional states were slower to make accurate size judgments than those not under induced emotional arousal. Results suggest that emotional state is an important factor that influences visual performance on perspective displays and is worthy of further study.

  12. Finite-size scaling tests for spectra in SU(3) lattice gauge theory coupled to 12 fundamental flavor fermions

    Science.gov (United States)

    Degrand, Thomas

    2011-12-01

    I carry out a finite-size scaling study of the correlation length in SU(3) lattice gauge theory coupled to 12 fundamental flavor fermions, using recent data published by Fodor, Holland, Kuti, Nógradi and Schroeder [Z. Fodor, K. Holland, J. Kuti, D. Nogradi, and C. Schroeder, Phys. Lett. B 703, 348 (2011), DOI: 10.1016/j.physletb.2011.07.037]. I make the assumption that the system is conformal in the zero-mass, infinite volume limit, that scaling is violated by both nonzero fermion mass and by finite volume, and that the scaling function in each channel is determined self-consistently by the data. From several different observables I extract a common exponent for the scaling of the correlation length $\xi$ with the fermion mass $m_q$, $\xi \sim m_q^{-1/y_m}$ with $y_m \approx 1.35$. Shortcomings of the analysis are discussed.

  13. Finite-size effects and scaling for the thermal QCD deconfinementphase transition within the exact color-singlet partition function

    Energy Technology Data Exchange (ETDEWEB)

    Ladrem, M.; Ait-El-Djoudi, A. [Ecole Normale Superieure-Kouba, Laboratoire de Physique des Particules et Physique Statistique, B.P. 92, Vieux-Kouba, Algiers (Algeria)

    2005-10-01

    We study the finite-size effects for the thermal quantum chromodynamics (QCD) deconfinement phase transition, and use a numerical finite-size scaling analysis to extract the scaling exponents characterizing its scaling behavior when approaching the thermodynamic limit (V → ∞). For this, we use a simple model of coexistence of hadronic gas and color-singlet quark gluon plasma (QGP) phases in a finite volume. The color-singlet partition function of the QGP cannot be exactly calculated and is usually derived within the saddle-point approximation. When we try to do calculations with such an approximate color-singlet partition function, a problem arises in the limit of small temperatures and/or volumes VT³ << 1, requiring additional approximations if we want to carry out calculations. We propose in this work a method for an accurate calculation of any quantity of the finite system, without any approximation. By probing the behavior of some useful thermodynamic response functions over the whole range of temperature, it turns out that, in a finite-size system, all singularities of the thermodynamic limit are smeared out and the transition point is shifted away. A numerical finite-size scaling (FSS) analysis of the obtained data allows us to determine the scaling exponents of the QCD deconfinement phase transition. Our results, which show that their values equal the space dimensionality, are a consequence of the singularity characterizing a first-order phase transition, and agree very well with the predictions of other FSS theoretical approaches to a first-order phase transition and with the results of calculations using Monte Carlo methods in both lattice QCD and statistical physics models. (orig.)

  14. Optimal Siting and Sizing of Energy Storage System for Power Systems with Large-scale Wind Power Integration

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun

    2015-01-01

    This paper proposes algorithms for optimal siting and sizing of Energy Storage System (ESS) for the operation planning of power systems with large-scale wind power integration. The ESS in this study aims to mitigate the wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain generation-load balance. The charging and discharging of ESS is optimized considering operation cost of conventional generators, capital cost of ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the optimal siting and sizing of storage units throughout the network. These questions are investigated using an IEEE benchmark system...

  15. Insulin/IGF-regulated size scaling of neuroendocrine cells expressing the bHLH transcription factor Dimmed in Drosophila.

    Directory of Open Access Journals (Sweden)

    Jiangnan Luo

    Full Text Available Neurons and other cells display a large variation in size in an organism. Thus, a fundamental question is how growth of individual cells and their organelles is regulated. Is size scaling of individual neurons regulated post-mitotically, independent of growth of the entire CNS? Although the role of insulin/IGF-signaling (IIS in growth of tissues and whole organisms is well established, it is not known whether it regulates the size of individual neurons. We therefore studied the role of IIS in the size scaling of neurons in the Drosophila CNS. By targeted genetic manipulations of insulin receptor (dInR expression in a variety of neuron types we demonstrate that the cell size is affected only in neuroendocrine cells specified by the bHLH transcription factor DIMMED (DIMM. Several populations of DIMM-positive neurons tested displayed enlarged cell bodies after overexpression of the dInR, as well as PI3 kinase and Akt1 (protein kinase B, whereas DIMM-negative neurons did not respond to dInR manipulations. Knockdown of these components produce the opposite phenotype. Increased growth can also be induced by targeted overexpression of nutrient-dependent TOR (target of rapamycin signaling components, such as Rheb (small GTPase, TOR and S6K (S6 kinase. After Dimm-knockdown in neuroendocrine cells manipulations of dInR expression have significantly less effects on cell size. We also show that dInR expression in neuroendocrine cells can be altered by up or down-regulation of Dimm. This novel dInR-regulated size scaling is seen during postembryonic development, continues in the aging adult and is diet dependent. The increase in cell size includes cell body, axon terminations, nucleus and Golgi apparatus. We suggest that the dInR-mediated scaling of neuroendocrine cells is part of a plasticity that adapts the secretory capacity to changing physiological conditions and nutrient-dependent organismal growth.


  17. On the Scaling of Small, Heat Simulated Jet Noise Measurements to Moderate Size Exhaust Jets

    Science.gov (United States)

    McLaughlin, Dennis K.; Bridges, James; Kuo, Ching-Wen

    2010-01-01

    Modern military aircraft jet engines are designed with variable geometry nozzles to provide optimum thrust in different operating conditions, depending on the flight envelope. However, the acoustic measurements for such nozzles are scarce, due to the cost involved in making full scale measurements and the lack of details about the exact geometry of these nozzles. Thus the present effort at The Pennsylvania State University and the NASA Glenn Research Center- in partnership with GE Aviation is aiming to study and characterize the acoustic field produced by supersonic jets issuing from converging-diverging military style nozzles. An equally important objective is to validate methodology for using data obtained from small and moderate scale experiments to reliably predict the most important components of full scale engine noise. The experimental results presented show reasonable agreement between small scale and moderate scale jet acoustic data, as well as between heated jets and heat-simulated ones. Unresolved issues however are identified that are currently receiving our attention, in particular the effect of the small bypass ratio airflow. Future activities will identify and test promising noise reduction techniques in an effort to predict how well such concepts will work with full scale engines in flight conditions.

  18. Size effect of flexible proof mass on the mechanical behavior of micron-scale cantilevers for energy harvesting applications.

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M.; Hong, S.; Miller, D. J.; Dugundji, J.; Wardle, B. L. (Materials Science Division); (MIT)

    2011-12-15

    Mechanical behavior of micron-scale cantilevers with a distributed, flexible proof mass is investigated to understand proof mass size effects on the performance of microelectromechanical system energy harvesters. Single-crystal silicon beams with proof masses of various lengths were fabricated using focused ion beam milling and tested using atomic force microscopy. Comparison of three different modeling results with measured data reveals that a 'two-beam' method has the most accurate predictive capability in terms of both resonant frequency and strain. Accurate strain prediction is essential because energy harvested scales with strain squared and maximum strain will be a design limit in fatigue.

  19. Finite-size corrections to scaling behavior in sorted cell aggregates.

    Science.gov (United States)

    Klopper, A V; Krens, G; Grill, S W; Heisenberg, C-P

    2010-10-01

    Cell sorting is a widespread phenomenon pivotal to the early development of multicellular organisms. In vitro cell sorting studies have been instrumental in revealing the cellular properties driving this process. However, these studies have as yet been limited to two-dimensional analysis of three-dimensional cell sorting events. Here we describe a method to record the sorting of primary zebrafish ectoderm and mesoderm germ layer progenitor cells in three dimensions over time, and quantitatively analyze their sorting behavior using an order parameter related to heterotypic interface length. We investigate the cell population size dependence of sorted aggregates and find that the germ layer progenitor cells engulfed in the final configuration display a relationship between total interfacial length and system size according to a simple geometrical argument, subject to a finite-size effect.

  20. Size Scaling and Bursting Activity in Thermally Activated Breakdown of Fiber Bundles

    KAUST Repository

    Yoshioka, Naoki

    2008-10-03

    We study subcritical fracture driven by thermally activated damage accumulation in the framework of fiber bundle models. We show that in the presence of stress inhomogeneities, thermally activated cracking results in an anomalous size effect; i.e., the average lifetime $t_f$ decreases as a power law of the system size, $t_f \sim L^{-z}$, where the exponent $z$ depends on the external load $\sigma$ and on the temperature $T$ in the form $z \sim f(\sigma/T^{3/2})$. We propose a modified form of the Arrhenius law which provides a comprehensive description of thermally activated breakdown. Thermal fluctuations trigger bursts of breakings which have a power law size distribution. © 2008 The American Physical Society.

  1. The relationship between 19th century BMIs and family size: Economies of scale and positive externalities.

    Science.gov (United States)

    Carson, Scott Alan

    2015-04-01

    The use of body mass index values (BMI) to measure living standards is now a well-accepted method in economics. Nevertheless, a neglected area in historical studies is the relationship between 19th century BMI and family size, and this relationship is documented here to be positive. Material inequality and BMI are the subject of considerable debate, and there was a positive relationship between BMI and wealth and an inverse relationship with inequality. After controlling for family size and wealth, BMI values were related with occupations, and farmers and laborers had greater BMI values than workers in other occupations.

  2. Patch size has no effect on insect visitation rate per unit area in garden-scale flower patches

    Science.gov (United States)

    Garbuzov, Mihail; Madsen, Andy; Ratnieks, Francis L. W.

    2015-01-01

    Previous studies investigating the effect of flower patch size on insect flower visitation rate have compared relatively large patches (10-1000s m2) and have generally found a negative relationship per unit area or per flower. Here, we investigate the effects of patch size on insect visitation in patches of smaller area (range c. 0.1-3.1 m2), which are of particular relevance to ornamental flower beds in parks and gardens. We studied two common garden plant species in full bloom with 6 patch sizes each: borage (Borago officinalis) and lavender (Lavandula × intermedia 'Grosso'). We quantified flower visitation by insects by making repeated counts of the insects foraging at each patch. On borage, all insects were honey bees (Apis mellifera, n = 5506 counts). On lavender, insects (n = 737 counts) were bumble bees (Bombus spp., 76.9%), flies (Diptera, 22.4%), and butterflies (Lepidoptera, 0.7%). On both plant species we found positive linear effects of patch size on insect numbers. However, there was no effect of patch size on the number of insects per unit area or per flower and, on lavender, for all insects combined or only bumble bees. The results show that it is possible to make unbiased comparisons of the attractiveness of plant species or varieties to flower-visiting insects using patches of different size within the small scale range studied and make possible projects aimed at comparing ornamental plant varieties using existing garden flower patches of variable area.

  3. Effect of training data size and noise level on support vector machines virtual screening of genotoxic compounds from large compound libraries.

    Science.gov (United States)

    Kumar, Pankaj; Ma, Xiaohua; Liu, Xianghui; Jia, Jia; Bucong, Han; Xue, Ying; Li, Ze Rong; Yang, Sheng Yong; Wei, Yu Quan; Chen, Yu Zong

    2011-05-01

    Various in vitro and in-silico methods have been used for drug genotoxicity tests, which show limited genotoxicity (GT+) and non-genotoxicity (GT-) identification rates. New methods and combinatorial approaches have been explored for enhanced collective identification capability. The rates of in-silico methods may be further improved by significantly diversified training data enriched by the large number of recently reported GT+ and GT- compounds, but a major concern is the increased noise levels arising from high false-positive rates of in vitro data. In this work, we evaluated the effect of training data size and noise level on the performance of the support vector machine (SVM) method, which is known to tolerate high noise levels in training data. Two SVMs of different diversity/noise levels were developed and tested. H-SVM trained by higher diversity higher noise data (GT+ in any in vivo or in vitro test) outperforms L-SVM trained by lower noise lower diversity data (GT+ in in vivo or Ames test only). H-SVM trained by 4,763 GT+ compounds reported before 2008 and 8,232 GT- compounds excluding clinical trial drugs correctly identified 81.6% of the 38 GT+ compounds reported since 2008, predicted 83.1% of the 2,008 clinical trial drugs as GT-, and 23.96% of 168 K MDDR and 27.23% of 17.86M PubChem compounds as GT+. These are comparable to the 43.1-51.9% GT+ and 75-93% GT- rates of existing in-silico methods, the 58.8% GT+ and 79% GT- rates of the Ames method, and the estimated percentages of 23% in vivo and 31-33% in vitro GT+ compounds in the "universe of chemicals". There is a substantial level of agreement between H-SVM and L-SVM predicted GT+ and GT- MDDR compounds and the prediction from TOPKAT. SVM showed good potential in identifying GT+ compounds from large compound libraries based on higher diversity and higher noise training data.
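
    The effect of training set size and label noise on an SVM can be probed with a small synthetic sketch like the one below (illustrative only; the features, sizes and noise levels are assumptions and are unrelated to the genotoxicity descriptors or compound libraries used in the study).

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        def svm_accuracy(n_train, noise_fraction, seed=0):
            """Train an RBF SVM on n_train samples with a given fraction of flipped labels
            and report accuracy on a clean held-out test set."""
            X, y = make_classification(n_samples=n_train + 2000, n_features=50,
                                       n_informative=15, random_state=seed)
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=n_train, random_state=seed)
            rng = np.random.default_rng(seed)
            flip = rng.random(len(y_tr)) < noise_fraction  # simulate noisy (false-positive/negative) labels
            y_noisy = np.where(flip, 1 - y_tr, y_tr)
            clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_noisy)
            return clf.score(X_te, y_te)

        for n in (200, 1000, 5000):
            for p in (0.0, 0.1, 0.3):
                print(f"n_train={n:5d}  noise={p:.1f}  accuracy={svm_accuracy(n, p):.3f}")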

  4. Size Matters: Economies of Scale in Schools and Colleges. Research Report

    Science.gov (United States)

    Owen, Glyn; Fletcher, Mick; Lester, Stan

    2006-01-01

    This report reviews the relationship in England between institutional size and the cost of Level 3 (mainly A-level) provision in three major settings: sixth form colleges (SFCs), general further education colleges (GFECs) and school sixth forms (SSFs). The study models how institutions might behave, given the funding regime and cost structures. It…

  5. Size effect of anaerobic granular sludge on biogas production: A micro scale study.

    Science.gov (United States)

    Wu, Jing; Afridi, Zohaib Ur Rehman; Cao, Zhi Ping; Zhang, Zhong Liang; Poncin, Souhila; Li, Huai Zhi; Zuo, Jian E; Wang, Kai Jun

    2016-02-01

    This study investigated the influence of anaerobic granular sludge size on its bioactivity at COD concentrations of 1000, 3000 and 6000 mg/L. Based on size, granules were categorized as large (3-3.5 mm), medium (1.5-2 mm) and small (0.5-1 mm). A positive relationship was obtained between granule size and biogas production rate. For instance, at COD 6000 mg/L, large granules had the highest biogas production rate of 0.031 m³/kgVSS/d while medium and small granules had 0.016 and 0.006 m³/kgVSS/d respectively. The results were reaffirmed by applying a modified Fick's law of diffusion. Diffusion rates of substrate for large, medium and small granules were 1.67×10⁻³, 6.1×10⁻⁴ and 1.8×10⁻⁴ mg/s respectively at that COD. Large granules were highly bio-active due to their internal structure, i.e. big pore size, high porosity and short diffusion distance compared to medium and small granules; thus large granules could improve the performance of the reactor. Copyright © 2015 Elsevier Ltd. All rights reserved.
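
    As a rough, order-of-magnitude illustration of the kind of diffusion estimate invoked here (a sketch using the simple steady-state form of Fick's first law, not the modified formulation applied in the paper; every number below is an assumption):

        import math

        def granule_diffusion_rate(diameter_mm, D_eff, delta_C, porosity):
            """Approximate substrate flux into a spherical granule, J = D_eff * A * dC/dx,
            taking the diffusion distance as the granule radius and scaling by porosity."""
            r = diameter_mm / 2.0 * 1e-3                  # radius in m
            area = 4.0 * math.pi * r**2                   # external surface area, m^2
            return porosity * D_eff * area * delta_C / r  # mg/s

        # Assumed values: D_eff ~ 1e-9 m^2/s, concentration difference 1000 mg/m^3 (about 1 mg/L),
        # and a porosity that decreases with granule size class.
        for d, phi in ((3.0, 0.8), (1.75, 0.6), (0.75, 0.4)):
            print(f"d = {d:4.2f} mm  J ~ {granule_diffusion_rate(d, 1e-9, 1000.0, phi):.2e} mg/s")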

  6. Size Scaling in Western North Atlantic Loggerhead Turtles Permits Extrapolation between Regions, but Not Life Stages.

    Directory of Open Access Journals (Sweden)

    Nina Marn

    Full Text Available Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i two different regional subsets and (ii three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications.Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal.

  7. LHC Report: machine development

    CERN Multimedia

    Rogelio Tomás García for the LHC team

    2015-01-01

    Machine development weeks are carefully planned in the LHC operation schedule to optimise and further study the performance of the machine. The first machine development session of Run 2 ended on Saturday, 25 July. Despite various hiccoughs, it allowed the operators to make great strides towards improving the long-term performance of the LHC.   The main goals of this first machine development (MD) week were to determine the minimum beam-spot size at the interaction points given existing optics and collimation constraints; to test new beam instrumentation; to evaluate the effectiveness of performing part of the beam-squeezing process during the energy ramp; and to explore the limits on the number of protons per bunch arising from the electromagnetic interactions with the accelerator environment and the other beam. Unfortunately, a series of events reduced the machine availability for studies to about 50%. The most critical issue was the recurrent trip of a sextupolar corrector circuit –...

  8. Anisotropic finite-size scaling of an elastic string at the depinning threshold in a random-periodic medium

    Directory of Open Access Journals (Sweden)

    Sebastián Bustingorry

    2010-02-01

    Full Text Available We numerically study the geometry of a driven elastic string at its sample-dependent depinning threshold in random-periodic media. We find that the anisotropic finite-size scaling of the average square width $\overline{w^2}$ and of its associated probability distribution are both controlled by the ratio $k = M/L^{\zeta_{dep}}$, where $\zeta_{dep}$ is the random-manifold depinning roughness exponent, $L$ is the longitudinal size of the string and $M$ the transverse periodicity of the random medium. The rescaled average square width $\overline{w^2}/L^{2\zeta_{dep}}$ displays a non-trivial single minimum for a finite value of $k$. We show that the initial decrease for small $k$ reflects the crossover at $k \sim 1$ from the random-periodic to the random-manifold roughness. The increase for very large $k$ implies that the increasingly rare critical configurations, accompanying the crossover to Gumbel critical-force statistics, display anomalous roughness properties: a transverse-periodicity scaling even though $\overline{w^2} \ll M$, and subleading corrections to the standard random-manifold longitudinal-size scaling. Our results are relevant to understanding the dimensional crossover from interface to particle depinning. Received: 20 October 2010, Accepted: 1 December 2010; Edited by: A. Vindigni; Reviewed by: A. A. Fedorenko, CNRS-Lab. de Physique, ENS de Lyon, France; DOI: 10.4279/PIP.020008

  9. An interpretation of size-scale plasticity in geometrically confined systems.

    Science.gov (United States)

    Espinosa, H D; Berbenni, S; Panico, M; Schwarz, K W

    2005-11-22

    The mesoscopic constitutive behavior of face-centered cubic metals as a function of the system characteristic dimension recently has been investigated experimentally. Strong size effects have been identified in both polycrystalline submicron thin films and single crystal micro pillars. The size effect is manifested as an increase in strength and hardening rate as the system dimensions are decreased. In this article, we provide a mechanistic interpretation for the observed mesoscopic behavior. By performing 3D discrete dislocation dynamics simulations of grains representative of the system microstructure and associated characteristic dimensions, we show that the experimentally observed size effects can be qualitatively described. In these simulations, a constant density of dislocation sources per unit of grain boundary area is modeled by sources randomly distributed at grain boundaries. The source length (strength) is modeled by a Gaussian distribution, in which average and standard deviation is independent of the system characteristic dimension. The simulations reveal that two key concepts are at the root of the observed plasticity size effect. First, the onset of plasticity is governed by a dislocation nucleation-controlled process (sources of various length, i.e., strengths, in our model). Second, the hardening rate is controlled by source exhaustion, i.e., sources are active only once as a result of the limited dislocation mobility arising from size and boundary effects. The model postulated here improves our understanding of why "smaller is stronger" and provides predictive capabilities that should enhance the reliable design of devices in applications such as microelectronics and micro/nano-electro-mechanical systems.

  10. A scaling theory for the size distribution of emitted dust aerosols suggests climate models underestimate the size of the global dust cycle.

    Science.gov (United States)

    Kok, Jasper F

    2011-01-18

    Mineral dust aerosols impact Earth's radiation budget through interactions with clouds, ecosystems, and radiation, which constitutes a substantial uncertainty in understanding past and predicting future climate changes. One of the causes of this large uncertainty is that the size distribution of emitted dust aerosols is poorly understood. The present study shows that regional and global circulation models (GCMs) overestimate the emitted fraction of clay aerosols (climate predictions in dusty regions. On a global scale, the dust cycle in most GCMs is tuned to match radiative measurements, such that the overestimation of the radiative cooling of a given quantity of emitted dust has likely caused GCMs to underestimate the global dust emission rate. This implies that the deposition flux of dust and its fertilizing effects on ecosystems may be substantially larger than thought.

  11. A simulation study provided sample size guidance for differential item functioning (DIF) studies using short scales

    DEFF Research Database (Denmark)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K;

    2009-01-01

    Differential item functioning (DIF) analyses are increasingly used to evaluate health-related quality of life (HRQoL) instruments, which often include relatively short subscales. Computer simulations were used to explore how various factors including scale length affect analysis of DIF by ordinal...

  12. Prediction of spatially variable unsaturated hydraulic conductivity using scaled particle-size distribution functions

    NARCIS (Netherlands)

    Nasta, P.; Romano, N.; Assouline, S; Vrugt, J.A.; Hopmans, J.W.

    2013-01-01

    Simultaneous scaling of soil water retention and hydraulic conductivity functions provides an effective means to characterize the heterogeneity and spatial variability of soil hydraulic properties in a given study area. The statistical significance of this approach largely depends on the number of s

  13. The PhytoSCALE project: calibrating phytoplankton cell size as a proxy for climatic adaptation

    Science.gov (United States)

    Henderiks, Jorijntje; Gerecht, Andrea; Hannisdal, Bjarte; Liow, Lee Hsiang; Reitan, Trond; Schweder, Tore; Edvardsen, Bente

    2013-04-01

    The Cenozoic fossil record reveals that coccolithophores (marine unicellular haptophyte algae) were globally more common and widespread, larger, and more heavily calcified before 34 million years ago (Ma), in a high-CO2 greenhouse world. We have recently demonstrated that changes in atmospheric CO2 have, directly or indirectly, exerted an important long-term control on the ecological prominence of coccolithophores as a whole [1]. On closer inspection, this macroevolutionary pattern primarily reflects the decline in abundance and subsequent extinction of large-celled and heavily calcified lineages, while small-sized species appear to have been more successful in adapting to the post-34 Ma "icehouse" world. Coccolith size (length) is a proxy for cellular volume-to-surface ratios (V:SA), as determined from fossil coccosphere geometries. Algal V:SA provides physiological constraints on carbon acquisition and other resource uptake rates, affecting both photosynthesis and calcification, and is therefore considered to be a key indicator of adaptation. As a general rule, small cells have faster growth rates than large cells under similar environmental conditions, giving small species a competitive advantage when resources become limiting. Our research aims to bridge the gap between short-term experimental observations of physiological and phenotypic plasticity in the modern species Emiliania huxleyi and Coccolithus pelagicus, and time series of the long-term phenotypic variability of their Cenozoic ancestors. Single-clone growth experiments revealed significant plasticity in cell size and coccolith volume under growth-limiting conditions. However, the range in coccolith size (length) remained relatively constant for single genotypes between various growth conditions. With these new data we test to what extent the size variation observed in the fossil time series is a reflection of anagenetic changes (i.e. evolution of an ancestral species to a descendant species without

  14. Size effects of nano-scale pinning centers on the superconducting properties of YBCO single grains

    Science.gov (United States)

    Moutalbi, Nahed; Noudem, Jacques G.; M'chirgui, Ali

    2014-08-01

    High pinning superconductors are the most promising materials for power engineering. Their superconducting properties are governed by the microstructure quality and the vortex pinning behavior. We report on a study of the vortex pinning in a YBa2Cu3O7-x (YBCO) single grain with defects induced through the addition of insulating nano-particles. In order to improve the critical current density, YBCO textured bulk superconductors were elaborated using the Top Seeded Melt Texture and Growth process with different addition amounts of Al2O3 nano-particles. Serving as strong pinning centers, a 0.05% excess of Al2O3 causes a significant enhancement of the critical current density Jc under self field and in magnetic fields at 77 K. The enhanced flux pinning achieved with the low level of alumina nano-particles confirms the effectiveness of insulating nano-inclusions in inducing effective pinning sites within the superconducting matrix. We also focused on the effect of the size of pinning centers on the critical current density. This work was carried out using two batches of alumina nano-particles characterized by two different particle size distributions with mean diameters PSD1 = 20 nm and PSD2 = 2.27 μm. The matching effects of the observed pinning force density have been compared. The obtained results have shown that the flux pinning is closely dependent on the size of the artificial pinning centers. Our results suggest that the optimization of the size of the artificial pinning centers is crucial to a much better understanding of the pinning mechanisms and therefore to ensure high superconducting performance for the practical application of superconducting materials.

  15. Scaling lower-limb isokinetic strength for biological maturation and body size in adolescent basketball players.

    Science.gov (United States)

    Carvalho, Humberto Moreira; Coelho-e-Silva, Manuel; Valente-dos-Santos, João; Gonçalves, Rui Soles; Philippaerts, Renaat; Malina, Robert

    2012-08-01

    The relationships between knee joint isokinetic strength, biological maturity status and body size were examined in 14-16-year-old basketball players, considering proportional allometric modeling. Biological maturity status was assessed with the maturity offset protocol. Stature, body mass, sitting height, and estimated thigh volume were measured by anthropometry. Maximal moments of force of concentric and eccentric muscular actions for the knee extensors and flexors were assessed by isokinetic dynamometry at 60° s(-1). Regression analysis revealed that the linear relations of maturity offset with the maximal moments of force of the knee extensors in both muscular actions, and of the knee flexors in concentric actions, were moderately high (0.55 ≤ r ≤ 0.64). As for knee flexors in eccentric actions, a squared term of the maturity indicator was significant, indicating that the relationship with maturity offset tended to plateau approximately 2 years after PHV. Incorporating the maturity indicator term together with a body size term (body mass or thigh volume) in the allometric models revealed that the size exponents for both body mass and thigh volume were reduced compared with simple allometric modeling. The results indicate a significant inter-individual variation in lower-limb isokinetic strength performance at 60° s(-1) in concentric and eccentric muscular actions in late adolescent basketball players. The variability in performance is related to inter-individual variation in estimated time before or after peak height velocity, as well as differences in body size. Proportional allometric models indicate that the influence of estimated time from age at peak height velocity on isokinetic strength performance is mostly mediated by corresponding changes in overall body mass.
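
    For illustration, a simple allometric scaling model of the form Y = a·M^b can be fitted as a linear model on log-transformed data, as sketched below with invented body-mass and torque values; the study's proportional allometric models additionally include a maturity-offset term.

        import numpy as np

        # Hypothetical data: body mass (kg) and peak knee-extensor moment (N*m).
        mass   = np.array([52.0, 58.5, 61.0, 66.3, 70.1, 74.8, 80.2])
        torque = np.array([118.0, 131.0, 140.0, 152.0, 158.0, 171.0, 182.0])

        # Allometric model: torque = a * mass**b  ->  log(torque) = log(a) + b*log(mass)
        b, log_a = np.polyfit(np.log(mass), np.log(torque), 1)
        print(f"allometric exponent b = {b:.2f}, coefficient a = {np.exp(log_a):.2f}")

        # Size-adjusted ("allometrically scaled") strength: torque / mass**b
        print("scaled strength:", np.round(torque / mass**b, 2))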

  16. Scaling of xylem and phloem transport capacity and resource usage with tree size

    OpenAIRE

    Hölttä, Teemu; Kurppa, Miika; Nikinmaa, Eero

    2013-01-01

    Xylem and phloem need to maintain steady transport rates of water and carbohydrates to match the exchange rates of these compounds at the leaves. A major proportion of the carbon and nitrogen assimilated by a tree is allocated to the construction and maintenance of the xylem and phloem long distance transport tissues. This proportion can be expected to increase with increasing tree size due to the growing transport distances between the assimilating tissues, i.e., leaves and fine roots, at th...

  17. Pricing and Capacity Sizing for Systems with Shared Resources: Approximate Solutions and Scaling Relations

    OpenAIRE

    Constantinos Maglaras; Assaf Zeevi

    2003-01-01

    This paper considers pricing and capacity sizing decisions, in a single-class Markovian model motivated by communication and information services. The service provider is assumed to operate a finite set of processing resources that can be shared among users; however, this shared mode of operation results in a service-rate degradation. Users, in turn, are sensitive to the delay implied by the potential degradation in service rate, and to the usage fee charged for accessing the system. We study t...

  18. Machine Translation

    Institute of Scientific and Technical Information of China (English)

    张严心

    2015-01-01

    As an ancillary translation tool, Machine Translation has long attracted increasing attention and has been studied from many angles by researchers and scholars. Understanding the definition of Machine Translation and analysing its benefits and problems is important for translators who wish to make good use of it, and helpful for developing and improving Machine Translation systems in the future.

  19. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview of current sustainable machining. Its chapters cover the concept in its economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful at both undergraduate and postgraduate levels and is of interest to all those working with manufacturing and machining technology.

  20. Self-consistent field theory based molecular dynamics with linear system-size scaling.

    Science.gov (United States)

    Richters, Dorothee; Kühne, Thomas D

    2014-04-01

    We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand-canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamiltonian operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.

  1. Self-consistent field theory based molecular dynamics with linear system-size scaling

    Energy Technology Data Exchange (ETDEWEB)

    Richters, Dorothee [Institute of Mathematics and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 9, D-55128 Mainz (Germany); Kühne, Thomas D., E-mail: kuehne@uni-mainz.de [Institute of Physical Chemistry and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 7, D-55128 Mainz (Germany); Technical and Macromolecular Chemistry, University of Paderborn, Warburger Str. 100, D-33098 Paderborn (Germany)

    2014-04-07

    We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand-canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamiltonian operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.
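
    The role of the thermostat can be illustrated with a generic Langevin integrator: friction and random kicks chosen according to the fluctuation-dissipation relation keep the sampling canonical even when the forces carry small systematic errors. The Python sketch below is a minimal Euler-Maruyama Langevin step for a toy one-dimensional harmonic potential; it is not the authors' modified Langevin equation, and all parameters are illustrative.

    import numpy as np

    def langevin_step(x, v, force, mass=1.0, gamma=1.0, kT=1.0, dt=0.01, rng=np.random):
        """One Euler-Maruyama Langevin step for a single degree of freedom."""
        f = force(x)
        sigma = np.sqrt(2.0 * gamma * kT * mass / dt)   # noise strength (fluctuation-dissipation)
        noise = sigma * rng.standard_normal()
        a = (f - gamma * mass * v + noise) / mass
        v_new = v + a * dt
        x_new = x + v_new * dt
        return x_new, v_new

    # Toy example: harmonic oscillator with force(x) = -k*x
    k = 1.0
    x, v = 1.0, 0.0
    for _ in range(10000):
        x, v = langevin_step(x, v, lambda q: -k * q)
    print("final position:", x)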

  2. Spatiotemporal Chaos in Large Systems The Scaling of Complexity with Size

    CERN Document Server

    Greenside, H S

    1996-01-01

    The dynamics of a nonequilibrium system can become complex because the system has many components (e.g., a human brain), because the system is strongly driven from equilibrium (e.g., large Reynolds-number flows), or because the system becomes large compared to certain intrinsic length scales. Recent experimental and theoretical work is reviewed that addresses this last route to complexity. In the idealized case of a sufficiently large, nontransient, homogeneous, and chaotic system, the fractal dimension D becomes proportional to the system's volume V which defines the regime of extensive chaos. The extensivity of the fractal dimension suggests a new way to characterize correlations in high-dimensional systems in terms of an intensive dimension correlation length ξ_δ. Recent calculations at Duke University show that ξ_δ is a length scale smaller than and independent of some commonly used measures of disorder such as the two-point and mutual-information correlation lengths. Identifying the bas...

  3. Large-scale genotyping identifies a new locus at 22q13.2 associated with female breast size.

    Science.gov (United States)

    Li, Jingmei; Foo, Jia Nee; Schoof, Nils; Varghese, Jajini S; Fernandez-Navarro, Pablo; Gierach, Gretchen L; Quek, Swee Tian; Hartman, Mikael; Nord, Silje; Kristensen, Vessela N; Pollán, Marina; Figueroa, Jonine D; Thompson, Deborah J; Li, Yi; Khor, Chiea Chuen; Humphreys, Keith; Liu, Jianjun; Czene, Kamila; Hall, Per

    2013-10-01

    Individual differences in breast size are a conspicuous feature of variation in human females and have been associated with fecundity and advantage in the selection of mates. To identify common variants that are associated with breast size, we conducted a large-scale genotyping association meta-analysis in 7169 women of European descent across three independent sample collections with digital or screen film mammograms. The samples consisted of the Swedish KARMA, LIBRO-1 and SASBAC studies genotyped on iCOGS, a custom Illumina iSelect genotyping array comprising 211,155 single nucleotide polymorphisms (SNPs) designed for replication and fine mapping of common and rare variants with relevance to breast, ovary and prostate cancer. Breast size of each subject was ascertained by measuring total breast area (mm^2) on a mammogram. We confirm genome-wide significant associations at 8p11.23 (rs10086016, p=1.3×10^-14) and report a new locus at 22q13 (rs5995871, p=3.2×10^-8). The latter region contains the MKL1 gene, which has been shown to impact endogenous oestrogen receptor α transcriptional activity and is recruited to oestradiol-sensitive genes. We also replicated previous genome-wide association study findings for breast size at four other loci. A new locus at 22q13 may be associated with female breast size.

  4. A multi-scale PDMS fabrication strategy to bridge the size mismatch between integrated circuits and microfluidics.

    Science.gov (United States)

    Muluneh, Melaku; Issadore, David

    2014-12-07

    In recent years there has been great progress harnessing the small-feature size and programmability of integrated circuits (ICs) for biological applications, by building microfluidics directly on top of ICs. However, a major hurdle to the further development of this technology is the inherent size-mismatch between ICs (~mm) and microfluidic chips (~cm). Increasing the area of the ICs to match the size of the microfluidic chip, as has often been done in previous studies, leads to a waste of valuable space on the IC and an increase in fabrication cost (>100×). To address this challenge, we have developed a three dimensional PDMS chip that can straddle multiple length scales of hybrid IC/microfluidic chips. This approach allows millimeter-scale ICs, with no post-processing, to be integrated into a centimeter-sized PDMS chip. To fabricate this PDMS chip we use a combination of soft-lithography and laser micromachining. Soft lithography was used to define micrometer-scale fluid channels directly on the surface of the IC, allowing fluid to be controlled with high accuracy and brought into close proximity to sensors for highly sensitive measurements. Laser micromachining was used to create ~50 μm vias to connect these molded PDMS channels to a larger PDMS chip, which can connect multiple ICs and house fluid connections to the outside world. To demonstrate the utility of this approach, we built and demonstrated an in-flow magnetic cytometer that consisted of a 5 × 5 cm^2 microfluidic chip that incorporated a commercial 565 × 1145 μm^2 IC with a GMR sensing circuit. We additionally demonstrated the modularity of this approach by building a chip that incorporated two of these GMR chips connected in series.

  5. A multi-scale PDMS fabrication strategy to bridge the size mismatch between integrated circuits and microfluidics†

    Science.gov (United States)

    Muluneh, Melaku

    2015-01-01

    In recent years there has been great progress harnessing the small-feature size and programmability of integrated circuits (ICs) for biological applications, by building microfluidics directly on top of ICs. However, a major hurdle to the further development of this technology is the inherent size-mismatch between ICs (~mm) and microfluidic chips (~cm). Increasing the area of the ICs to match the size of the microfluidic chip, as has often been done in previous studies, leads to a waste of valuable space on the IC and an increase in fabrication cost (>100×). To address this challenge, we have developed a three dimensional PDMS chip that can straddle multiple length scales of hybrid IC/microfluidic chips. This approach allows millimeter-scale ICs, with no post-processing, to be integrated into a centimeter-sized PDMS chip. To fabricate this PDMS chip we use a combination of soft-lithography and laser micromachining. Soft lithography was used to define micrometer-scale fluid channels directly on the surface of the IC, allowing fluid to be controlled with high accuracy and brought into close proximity to sensors for highly sensitive measurements. Laser micromachining was used to create ~50 μm vias to connect these molded PDMS channels to a larger PDMS chip, which can connect multiple ICs and house fluid connections to the outside world. To demonstrate the utility of this approach, we built and demonstrated an in-flow magnetic cytometer that consisted of a 5 × 5 cm^2 microfluidic chip that incorporated a commercial 565 × 1145 μm^2 IC with a GMR sensing circuit. We additionally demonstrated the modularity of this approach by building a chip that incorporated two of these GMR chips connected in series. PMID:25284502

  6. Removal performance and water quality analysis of paper machine white water in a full-scale wastewater treatment plant.

    Science.gov (United States)

    Shi, Shuai; Wang, Can; Fang, Shuai; Jia, Minghao; Li, Xiaoguang

    2016-09-29

    Paper machine white water is generally characterized by a high concentration of suspended solids and organic matter. A combined physicochemical-biological and filtration process was used in this study for removing pollutants from the wastewater. The removal efficiencies of the pollutants in the physicochemical and biological stages were evaluated separately. Furthermore, advanced techniques were used to analyse the water quality before and after treatment. Experimental results showed that the overall removal efficiency of suspended solids (SS) was above 99%, of which the physicochemical treatment in the forepart of the system accounted for about 97%. The removal efficiencies of chemical oxygen demand (COD) and colour showed similar trends after physicochemical treatment and corresponded to the proportion of suspended and near-colloidal organic matter in the wastewater. After biological treatment, the removal efficiencies of COD and colour reached about 97% and 90%, respectively. Furthermore, molecular weight (MW) distribution analysis showed that low-MW molecules predominated after treatment, and chromatography/mass spectrometry showed that the composition of organic matter in the wastewater was not complicated. Methylsiloxanes were the typical organic components in the raw wastewater and most of them were removed after treatment.

  7. A Mathematical Model of the Color Preference Scale Construction in Quality Management at the Machine-Building Enterprise

    Science.gov (United States)

    Averchenkov, V. I.; Kondratenko, S. V.; Potapov, L. A.; Spasennikov, V. V.

    2017-01-01

    In this article, the authors consider the basic features of color preferences. Well-known earlier studies confirm their consistency and independence from subjective factors. The article examines a method for constructing a respondent's individual color preference scale based on L. Thurstone's paired comparison method, and gives a practical example of applying this technique. The result of the method is an individual color preference scale with a weight value for each color. The authors also developed and present an algorithm for applying this method within a software package that determines respondents' attitudes toward the issues under investigation based on their color preferences. The article also considers the possibility of using the software at industrial enterprises to improve consumer product quality.
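
    One standard way to turn paired-comparison choices into an interval preference scale is Thurstone's Case V model, in which the scale value of a stimulus is the mean z-score of the proportions of trials in which it is preferred over each of the others. The Python sketch below is a generic illustration of that computation under hypothetical data; the paper's exact procedure and software are not reproduced here.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical choice proportions: P[i, j] = fraction of trials in which
    # color i was preferred over color j (diagonal set to 0.5 by convention).
    colors = ["red", "green", "blue", "yellow"]
    P = np.array([
        [0.50, 0.70, 0.60, 0.80],
        [0.30, 0.50, 0.45, 0.65],
        [0.40, 0.55, 0.50, 0.70],
        [0.20, 0.35, 0.30, 0.50],
    ])

    # Clip to avoid infinite z-scores, then apply Case V:
    # scale value of i = mean over j of z(P[i, j]).
    z = norm.ppf(np.clip(P, 0.01, 0.99))
    scale = z.mean(axis=1)
    scale -= scale.min()               # anchor the least preferred color at zero

    for c, s in sorted(zip(colors, scale), key=lambda t: -t[1]):
        print(f"{c:7s} {s:.2f}")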

  8. Scalable electron correlation methods I.: PNO-LMP2 with linear scaling in the molecular size and near-inverse-linear scaling in the number of processors.

    Science.gov (United States)

    Werner, Hans-Joachim; Knizia, Gerald; Krause, Christine; Schwilk, Max; Dornbach, Mark

    2015-02-10

    We propose to construct electron correlation methods that are scalable in both molecule size and aggregated parallel computational power, in the sense that the total elapsed time of a calculation becomes nearly independent of the molecular size when the number of processors grows linearly with the molecular size. This is shown to be possible by exploiting a combination of local approximations and parallel algorithms. The concept is demonstrated with a linear scaling pair natural orbital local second-order Møller-Plesset perturbation theory (PNO-LMP2) method. In this method, both the wave function manifold and the integrals are transformed incrementally from projected atomic orbitals (PAOs) first to orbital-specific virtuals (OSVs) and finally to pair natural orbitals (PNOs), which allow for minimum domain sizes and fine-grained accuracy control using very few parameters. A parallel algorithm design is discussed, which is efficient for both small and large molecules, and numbers of processors, although true inverse-linear scaling with compute power is not yet reached in all cases. Initial applications to reactions involving large molecules reveal surprisingly large effects of dispersion energy contributions as well as large intramolecular basis set superposition errors in canonical MP2 calculations. In order to account for the dispersion effects, the usual selection of PNOs on the basis of natural occupation numbers turns out to be insufficient, and a new energy-based criterion is proposed. If explicitly correlated (F12) terms are included, fast convergence to the MP2 complete basis set (CBS) limit is achieved. For the studied reactions, the PNO-LMP2-F12 results deviate from the canonical MP2/CBS and MP2-F12 values by <1 kJ mol^-1, using triple-ζ (VTZ-F12) basis sets.

  9. Effects of foreground scale in texture discrimination tasks: performance is size, shape, and content specific.

    Science.gov (United States)

    Rubenstein, B S; Sagi, D

    1993-01-01

    Textural gradients can be defined as differences across space in orientation and spatial frequency content, along with absolute luminance and contrast. In this study, stimuli were created with gradients of these types to see how changing the size and shape of the foreground region affects the psychophysical task. The foreground regions were designed as clusters of target texels alternating with interleaved background texels (of the same cluster size). This design gives rise to a texture square-wave, with texture frequency defined by the distance from the beginning of one target cluster to the next. It was found that for stimuli with vertical and horizontal Gabor patches, the relationship between the global and local orientation of the foreground region is a critical variable, indicating some global-local interaction. When the global orientation of the foreground region is orthogonal to local target texel orientation, visibility is optimal for high texture frequencies, while for parallel arrangements, low texture frequencies are most visible. The latter result was also found to a lesser degree for tasks involving contrast gradients as well as spatial-frequency gradients, but with no effect caused by varying the global orientation. The results indicate the existence of a second-stage filter that integrates (across space) responses of similar first-stage spatial filters, and then sums the resultant activities with those of orthogonal first-stage filters which lie spatially to the sides of the local orientation. The size of these integrating mechanisms may extend to more than 7 deg, with connections between smoothed activities of filters with orthogonal orientations spanning approximately 1-2 deg.

  10. Mapping Savanna Tree Species at Ecosystem Scales Using Support Vector Machine Classification and BRDF Correction on Airborne Hyperspectral and LiDAR Data

    Directory of Open Access Journals (Sweden)

    Gregory P. Asner

    2012-11-01

    Full Text Available Mapping the spatial distribution of plant species in savannas provides insight into the roles of competition, fire, herbivory, soils and climate in maintaining the biodiversity of these ecosystems. This study focuses on the challenges facing large-scale species mapping using a fusion of Light Detection and Ranging (LiDAR) and hyperspectral imagery. Here we build upon previous work on airborne species detection by using a two-stage support vector machine (SVM) classifier to first predict species from hyperspectral data at the pixel scale. Tree crowns are segmented from the LiDAR imagery such that crown-level information, such as maximum tree height, can then be combined with the pixel-level species probabilities to predict the species of each tree. An overall prediction accuracy of 76% was achieved for 15 species. We also show that bidirectional reflectance distribution function (BRDF) effects caused by anisotropic scattering properties of savanna vegetation can result in flight line artifacts evident in species probability maps, yet these can be largely mitigated by applying a semi-empirical BRDF model to the hyperspectral data. We find that confronting these three challenges—reflectance anisotropy, integration of pixel- and crown-level data, and crown delineation over large areas—enables species mapping at ecosystem scales for monitoring biodiversity and ecosystem function.
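
    The two-stage scheme described above can be sketched generically: a probabilistic SVM assigns per-pixel species probabilities from the spectra, those probabilities are averaged within each LiDAR-derived crown segment, a crown-level feature such as maximum height is appended, and a second classifier makes the final call. The scikit-learn sketch below assumes hypothetical arrays (pixel_spectra, pixel_labels, crown_ids, crown_height, crown_labels) and is not the authors' production pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Stage 1: pixel-level probabilistic SVM on the hyperspectral bands.
    # pixel_spectra: (n_pixels, n_bands); pixel_labels: (n_pixels,) species codes
    pixel_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    pixel_clf.fit(pixel_spectra, pixel_labels)
    pixel_proba = pixel_clf.predict_proba(pixel_spectra)    # (n_pixels, n_species)

    # Stage 2: aggregate pixel probabilities within each LiDAR-segmented crown
    # and append a crown-level feature (e.g., maximum tree height).
    crown_list = np.unique(crown_ids)
    crown_features = np.array([
        np.concatenate([pixel_proba[crown_ids == c].mean(axis=0), [crown_height[c]]])
        for c in crown_list
    ])
    crown_clf = SVC(kernel="rbf")
    crown_clf.fit(crown_features, crown_labels)             # crown_labels: field-identified species
    crown_species = crown_clf.predict(crown_features)       # training data reused here only for brevity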

  11. Integrating temporal and spatial scales: Human structural network motifs across age and region-of-interest size

    CERN Document Server

    Echtermeyer, Christoph; Rotarska-Jagiela, Anna; Mohr, Harald; Uhlhaas, Peter J; Kaiser, Marcus

    2011-01-01

    Human brain networks can be characterized at different temporal or spatial scales given by the age of the subject or the spatial resolution of the neuroimaging method. Integration of data across scales can only be successful if the combined networks show a similar architecture. One way to compare networks is to look at spatial features, based on fibre length, and topological features of individual nodes where outlier nodes form single node motifs whose frequency yields a fingerprint of the network. Here, we observe how characteristic single node motifs change over age (12-23 years) and network size (414, 813, and 1615 nodes) for diffusion tensor imaging (DTI) structural connectivity in healthy human subjects. First, we find the number and diversity of motifs in a network to be strongly correlated. Second, comparing different scales, the number and diversity of motifs varied across the temporal (subject age) and spatial (network resolution) scale: certain motifs might only occur at one spatial scale or for a c...

  12. Development of an Electrically Operated Cassava Peeling and Slicing Machine

    Directory of Open Access Journals (Sweden)

    I. S. Aji

    2017-08-01

    Full Text Available The development and construction of an electrically operated cassava peeling and slicing machine is described in this paper. The objective was to design, construct and test an electrically operated machine that will peel and slice cassava roots into chips, to aid the processes of drying, pelletizing and storage. The methodology adopted includes design, construction, calculation, specification, assembly of component parts and performance testing. The machine was able to peel and slice cassava to fairly similar sizes. The performance test revealed that 7 kg of cassava tuber was peeled and chipped in one minute, which shows that the machine developed can significantly reduce the cost of labour and the time wasted in traditional processing of cassava tubers into dried cassava pellets and finished products such as garri and cassava flour. The machine has a capacity of 6.72 kg/min, with peeling and chipping efficiencies of 66.2% and 84.0%, respectively. The flesh loss of the peeled tuber was 8.52%, while the overall machine efficiency was 82.4%. The machine is recommended for use by small-scale industries and by cassava farmers in rural areas. It has an overall cost of N46,100 ($150). The machine can easily be operated by an individual and maintained by washing the component parts with warm water and sharpening the chipping disc when required.

  13. Scattering pulse of label free fine structure cells to determine the size scale of scattering structures

    Science.gov (United States)

    Zhang, Lu; Chen, Xingyu; Zhang, Zhenxi; Chen, Wei; Zhao, Hong; Zhao, Xin; Li, Kaixing; Yuan, Li

    2016-04-01

    The scattering pulse is sensitive to the morphology and components of each single label-free cell. As the most direct detection result, the scattering pulse of a label-free cell is studied in this paper as a novel trait for distinguishing large malignant cells from small normal cells. A method for calculating the intrinsic scattering pulse is worked out, combining hydraulic focusing theory and small-particle scattering principles. Based on the scattering detection angle ranges of widely used flow cytometry, the scattering pulses formed by cell scattering energy in the forward scattering angle range 2°-5° and the side scattering angle range 80°-110° are discussed. Combining the analysis of the cell's illuminating light energy, the peak, area, and full width at half maximum (FWHM) of label-free cells' scattering pulses are studied for fine-structure cells with diameters of 1-20 μm, to extract the relations between the features of the scattering pulse and the cell's morphology. The theoretical and experimental results show that the cell diameter and the FWHM of its scattering pulse follow an approximately linear relationship; the peak and area of the scattering pulse do not always increase with cell diameter, but for diameters below about 16 μm the peak and area increase monotonically with diameter. This relationship between the features of the scattering pulse and the cell's size is potentially a useful yet very simple criterion for distinguishing malignant from normal cells by their sizes and morphologies in clinical examinations of label-free cells.

  14. Scaling laws of strategic behavior and size heterogeneity in agent dynamics

    Science.gov (United States)

    Vaglica, Gabriella; Lillo, Fabrizio; Moro, Esteban; Mantegna, Rosario N.

    2008-03-01

    We consider the financial market as a model system and study empirically how agents strategically adjust the properties of large orders in order to meet their preference and minimize their impact. We quantify this strategic behavior by detecting scaling relations between the variables characterizing the trading activity of different institutions. We also observe power-law distributions in the investment time horizon, in the number of transactions needed to execute a large order, and in the traded value exchanged by large institutions, and we show that heterogeneity of agents is a key ingredient for the emergence of some aggregate properties characterizing this complex system.

  15. Finite size scaling and first-order phase transition in a modified XY model

    Science.gov (United States)

    Sinha, Suman; Roy, Soumen Kumar

    2010-02-01

    Monte Carlo simulation has been performed on a two-dimensional modified XY model first proposed by Domany [Phys. Rev. Lett. 52, 1535 (1984)]. The cluster algorithm of Wolff has been used, and multiple-histogram reweighting is performed. First-order finite-size scaling behavior of quantities such as the specific heat and the free-energy barrier is found to be obeyed accurately. While the lowest-order correlation function was found to decay to zero at long distance just above the transition, the next-higher-order correlation function shows a nonzero plateau.
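
    For orientation, a single Wolff cluster update for the standard two-dimensional XY model (not the modified Domany Hamiltonian studied here) reflects spins about a random direction and grows the cluster with the usual embedded-Ising bond probabilities. The Python sketch below is a minimal illustration with arbitrary lattice size and temperature.

    import numpy as np

    def wolff_update(theta, beta, rng):
        """One Wolff cluster update for the 2D XY model, spins s_i = (cos t_i, sin t_i)."""
        L = theta.shape[0]
        phi = rng.uniform(0.0, 2.0 * np.pi)       # reflection axis r = (cos phi, sin phi)
        proj = np.cos(theta - phi)                # s_i . r for the current configuration

        seed = (rng.integers(L), rng.integers(L))
        cluster = {seed}
        stack = [seed]
        while stack:
            i, j = stack.pop()
            for ni, nj in ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L):
                if (ni, nj) in cluster:
                    continue
                # bond probability of the embedded Ising model:
                # p = 1 - exp(min(0, -2*beta*(s_i.r)*(s_j.r)))
                p = 1.0 - np.exp(min(0.0, -2.0 * beta * proj[i, j] * proj[ni, nj]))
                if rng.random() < p:
                    cluster.add((ni, nj))
                    stack.append((ni, nj))
        for i, j in cluster:                      # reflect: s -> s - 2(s.r)r
            theta[i, j] = (2.0 * phi + np.pi - theta[i, j]) % (2.0 * np.pi)
        return len(cluster)

    rng = np.random.default_rng(0)
    L, beta = 16, 1.1
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))
    for _ in range(1000):
        wolff_update(theta, beta, rng)
    m = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
    print("magnetization per spin:", m)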

  16. Micro powder injection molding——large scale production technology for micro-sized components

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Micro powder injection molding (μPIM), a miniaturized variant of powder injection molding, has advantages of shape complexity, applicability to many materials and good mechanical properties. Co-injection molding has been realized between metals and ceramics on micro components, which become the first breakthrough within the PIM field. Combined with the prominent characteristics of high features/cost ratio, micro powder injection molding becomes a potential technique for large scale production of intricate and three-dimensional micro components or micro-structured components in microsystems technology (MST) field.

  17. Micro powder injection molding-large scale production technology for micro-sized components

    Institute of Scientific and Technical Information of China (English)

    YIN HaiQing; JIA ChengChang; QU XuanHui

    2008-01-01

    Micro powder injection molding (μPIM), a miniaturized variant of powder injection molding, has advantages of shape complexity, applicability to many materials and good mechanical properties. Co-injection molding has been realized between metals and ceramics on micro components, which become the first breakthrough within the PIM field. Combined with the prominent characteristics of high features/cost ratio, micro powder injection molding becomes a potential technique for large scale production of intricate and three-dimensional micro components or microstructured components in microsystems technology (MST) field.

  18. Toward industrial scale synthesis of ultrapure singlet nanoparticles with controllable sizes in a continuous gas-phase process

    Science.gov (United States)

    Feng, Jicheng; Biskos, George; Schmidt-Ott, Andreas

    2015-10-01

    Continuous gas-phase synthesis of nanoparticles is associated with rapid agglomeration, which can be a limiting factor for numerous applications. In this report, we challenge this paradigm by providing experimental evidence to support that gas-phase methods can be used to produce ultrapure non-agglomerated “singlet” nanoparticles having tunable sizes at room temperature. By controlling the temperature in the particle growth zone to guarantee complete coalescence of colliding entities, the size of singlets in principle can be regulated from that of single atoms to any desired value. We assess our results in the context of a simple analytical model to explore the dependence of singlet size on the operating conditions. Agreement of the model with experimental measurements shows that these methods can be effectively used for producing singlets that can be processed further by many alternative approaches. Combined with the capabilities of up-scaling and unlimited mixing that spark ablation enables, this study provides an easy-to-use concept for producing the key building blocks for low-cost industrial-scale nanofabrication of advanced materials.

  19. Toward industrial scale synthesis of ultrapure singlet nanoparticles with controllable sizes in a continuous gas-phase process.

    Science.gov (United States)

    Feng, Jicheng; Biskos, George; Schmidt-Ott, Andreas

    2015-10-29

    Continuous gas-phase synthesis of nanoparticles is associated with rapid agglomeration, which can be a limiting factor for numerous applications. In this report, we challenge this paradigm by providing experimental evidence to support that gas-phase methods can be used to produce ultrapure non-agglomerated "singlet" nanoparticles having tunable sizes at room temperature. By controlling the temperature in the particle growth zone to guarantee complete coalescence of colliding entities, the size of singlets in principle can be regulated from that of single atoms to any desired value. We assess our results in the context of a simple analytical model to explore the dependence of singlet size on the operating conditions. Agreement of the model with experimental measurements shows that these methods can be effectively used for producing singlets that can be processed further by many alternative approaches. Combined with the capabilities of up-scaling and unlimited mixing that spark ablation enables, this study provides an easy-to-use concept for producing the key building blocks for low-cost industrial-scale nanofabrication of advanced materials.

  20. Finite-size scaling as a tool in the search for the QCD critical point in heavy ion data

    CERN Document Server

    Fraga, Eduardo S; Sorensen, Paul

    2011-01-01

    Given the short lifetime and the reduced volume of the quark-gluon plasma (QGP) formed in high-energy heavy ion collisions, a possible critical endpoint (CEP) will be blurred in a region and the effects from criticality severely smoothened. Nevertheless, the non-monotonic behavior of correlation functions near criticality for systems of different sizes, given by different centralities in heavy ion collisions, must obey finite-size scaling. We apply the predicting power of scaling plots to the search for the CEP of strong interactions in heavy ion collisions using data from RHIC and SPS. The results of our data analysis exclude a critical point below chemical potentials μ ~ 450 MeV. Extrapolating the analysis, we speculate that criticality could appear slightly above μ ~ 500 MeV. Using available data we extrapolate our scaling curves to predict the behavior of new data at lower center-of-mass energy, currently being investigated in the Beam Energy Scan program at RHIC. If it turns out that the QGP ...
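
    The analysis rests on the standard finite-size scaling hypothesis: near a critical point, an observable X measured in systems of linear size L collapses onto a single curve when rescaled appropriately. In generic notation (not the specific observables or exponents of the cited analysis),

        X(t, L) \simeq L^{\gamma_X/\nu} \, f\!\left(t \, L^{1/\nu}\right), \qquad t = \frac{T - T_c}{T_c},

    so plotting X L^{-\gamma_X/\nu} against t L^{1/\nu} should collapse data from different system sizes (here, different collision centralities) onto one curve.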

  1. Sizing the Jurassic theropod dinosaur Allosaurus: assessing growth strategy and evolution of ontogenetic scaling of limbs.

    Science.gov (United States)

    Bybee, Paul J; Lee, Andrew H; Lamm, Ellen-Thérèse

    2006-03-01

    Allosaurus is one of the most common Mesozoic theropod dinosaurs. We present a histological analysis to assess its growth strategy and ontogenetic limb bone scaling. Based on an ontogenetic series of humeral, ulnar, femoral, and tibial sections of fibrolamellar bone, we estimate the ages of the largest individuals in the sample to be between 13 and 19 years. Growth curve reconstruction suggests that maximum growth occurred at 15 years, when body mass increased 148 kg/year. Based on larger bones of Allosaurus, we estimate an upper age limit of between 22 and 28 years of age, which is similar to preliminary data for other large theropods. Both Model I and Model II regression analyses suggest that relative to the length of the femur, the humerus, ulna, and tibia increase in length more slowly than isometry predicts. That pattern of limb scaling in Allosaurus is similar to those in other large theropods such as the tyrannosaurids.

  2. Release probability of hippocampal glutamatergic terminals scales with the size of the active zone.

    Science.gov (United States)

    Holderith, Noemi; Lorincz, Andrea; Katona, Gergely; Rózsa, Balázs; Kulik, Akos; Watanabe, Masahiko; Nusser, Zoltan

    2012-06-10

    Cortical synapses have structural, molecular and functional heterogeneity; our knowledge regarding the relationship between their ultrastructural and functional parameters is still fragmented. Here we asked how the neurotransmitter release probability and presynaptic [Ca^2+] transients relate to the ultrastructure of rat hippocampal glutamatergic axon terminals. Two-photon Ca^2+ imaging-derived optical quantal analysis and correlated electron microscopic reconstructions revealed a tight correlation between the release probability and the active-zone area. Peak amplitude of [Ca^2+] transients in single boutons also positively correlated with the active-zone area. Freeze-fracture immunogold labeling revealed that the voltage-gated calcium channel subunit Cav2.1 and the presynaptic protein Rim1/2 are confined to the active zone and their numbers scale linearly with the active-zone area. Gold particles labeling Cav2.1 were nonrandomly distributed in the active zones. Our results demonstrate that the numbers of several active-zone proteins, including presynaptic calcium channels, as well as the number of docked vesicles and the release probability, scale linearly with the active-zone area.

  3. Allometric scaling relationship between frequency of intestinal contraction and body size in rodents and rabbits

    Indian Academy of Sciences (India)

    Hossein-Ali Arab; Samad Muhammadnejad; Saeideh Naeimi; Attieh Arab

    2013-06-01

    This study aimed to establish an allometric scaling relationship between the frequency of intestinal contractions and the body mass of different mammalian species. The frequencies of intestinal contractions of rabbit, guinea pig, rat and mouse were measured using an isolated organ system. Isolated rings were prepared from the proximal segments of the jejunums and the frequency of contractions was recorded by an isometric force procedure. The coefficients of the resulting allometric equation were ascertained by least-squares computation after logarithmic transformation of both body mass and frequency. Significant differences (P < 0.001) were found in the frequency of contractions between the different species. The highest frequency, corresponding to the mice, was 57.7 min^-1 with a 95% confidence interval (CI) ranging from 45.4 to 70, while rabbits showed the lowest frequency (12.71 min^-1, CI: 8.6-16.8). Logarithms of frequency were statistically proportional to logarithms of body mass (r = 0.99; P < 0.001). The data fitted the allometric equation f = 18.51 M^-0.31, where f is the contraction frequency and M the body mass, and the 95% confidence interval of the exponent ranged from -0.30 to -0.32. The results of this study suggest that it is probably possible to extrapolate the intestinal contraction frequency of other mammalian species by means of allometric scaling.
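
    As a worked example of the extrapolation the authors suggest, and assuming the body mass M in the fitted equation is expressed in kilograms (which is consistent with the reported mouse and rabbit values), the relation predicts for a hypothetical 70 kg mammal

        f(M) = 18.51 \, M^{-0.31}\ \mathrm{min}^{-1}, \qquad f(70\ \mathrm{kg}) = 18.51 \times 70^{-0.31} \approx 5\ \mathrm{min}^{-1},

    an illustrative figure only, since the fit was established for species between mouse and rabbit size.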

  4. Validity of a figure rating scale assessing body size perception in school-age children.

    Science.gov (United States)

    Lombardo, Caterina; Battagliese, Gemma; Pezzuti, Lina; Lucidi, Fabio

    2014-01-01

    This study aimed to provide data concerning the validity of a short sequence of face valid pictorial stimuli assessing the perception of body size in school-age children. A sequence of gender and age-appropriate silhouettes was administered to 314 boys and girls aged 6-14 years. The self-evaluations provided by the children correlated significantly with their actual BMI corrected for age. Furthermore, the children's self-evaluations always significantly correlated with the evaluations provided by the three external observers; i.e., both parents and the interviewers. The results indicate that this sequence of pictorial stimuli, depicting realistic human forms appropriate for children, is a valid measure of children's body image. Relevant differences across age groups were also found, indicating that before the age of eight, the correlations between the children's self-evaluations and their BMI or the judgments of the three observers are lower than in the other age groups.

  5. Mechanobiological induction of long-range contractility by diffusing biomolecules and size scaling in cell assemblies

    Science.gov (United States)

    Dasbiswas, K.; Alster, E.; Safran, S. A.

    2016-06-01

    Mechanobiological studies of cell assemblies have generally focused on cells that are, in principle, identical. Here we predict theoretically the effect on cells in culture of locally introduced biochemical signals that diffuse and locally induce cytoskeletal contractility which is initially small. In steady-state, both the concentration profile of the signaling molecule as well as the contractility profile of the cell assembly are inhomogeneous, with a characteristic length that can be of the order of the system size. The long-range nature of this state originates in the elastic interactions of contractile cells (similar to long-range “macroscopic modes” in non-living elastic inclusions) and the non-linear diffusion of the signaling molecules, here termed mechanogens. We suggest model experiments on cell assemblies on substrates that can test the theory as a prelude to its applicability in embryo development where spatial gradients of morphogens initiate cellular development.

  6. Large-scale Hydrothermal Synthesis and Characterization of Size-controlled Lanthanum Hydroxide Nanorods

    Institute of Scientific and Technical Information of China (English)

    YI Ran; ZHANG Ning; SHI Rongrong; LI Yongbo; QIU Guanzhou; LIU Xiaohe

    2009-01-01

    Uniform lanthanum hydroxide nanorods were successfully synthesized in large quantities through a facile hydrothermal synthetic method, in which soluble lanthanum nitrate was used to supply the lanthanum source and triethylamine (TEA) was used as both alkaline agent and complexing agent. The influences of triethylamine amount, surfactant, reaction temperature and time on the size and shape of lanthanum hydroxide nanorods were investigated in detail. Trivalent rare earth ion doped lanthanum hydroxide nanorods were also obtained in this paper. The phase structures and morphologies of the as-prepared products were investigated in detail by X-ray diffraction (XRD), transmission electron microscopy (TEM), selected area electron diffraction (SAED) and high-resolution transmission electron microscopy (HRTEM). The probable formation mechanism was proposed based on the experimental results.

  7. Renormalization-group theory for finite-size scaling in extreme statistics.

    Science.gov (United States)

    Györgyi, G; Moloney, N R; Ozogány, K; Rácz, Z; Droz, M

    2010-04-01

    We present a renormalization-group (RG) approach to explain universal features of extreme statistics applied here to independent identically distributed variables. The outlines of the theory have been described in a previous paper, the main result being that finite-size shape corrections to the limit distribution can be obtained from a linearization of the RG transformation near a fixed point, leading to the computation of stable perturbations as eigenfunctions. Here we show details of the RG theory which exhibit remarkable similarities to the RG known in statistical physics. Besides the fixed points explaining universality, and the least stable eigendirections accounting for convergence rates and shape corrections, the similarities include marginally stable perturbations which turn out to be generic for the Fisher-Tippett-Gumbel class. Distribution functions containing unstable perturbations are also considered. We find that, after a transitory divergence, they return to the universal fixed line at the same or at a different point depending on the type of perturbation.
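
    For orientation, the fixed point of the Fisher-Tippett-Gumbel class referred to here is the Gumbel limit distribution of suitably shifted and scaled maxima,

        G(y) = \exp\!\left(-e^{-y}\right),

    with the finite-size shape corrections discussed in the abstract entering as small perturbations around this fixed point, obtained from the eigenfunctions of the linearized RG transformation.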

  8. EMIC Wave Scale Size in the Inner Magnetosphere: Observations From the Dual Van Allen Probes

    Science.gov (United States)

    Blum, L. W.; Bonnell, J. W.; Agapitov, O.; Paulson, K.; Kletzing, C.

    2017-01-01

    Estimating the spatial scales of electromagnetic ion cyclotron (EMIC) waves is critical for quantifying their overall scattering efficiency and effects on thermal plasma, ring current, and radiation belt particles. Using measurements from the dual Van Allen Probes in 2013-2014, we characterize the spatial and temporal extents of regions of EMIC wave activity and how these depend on local time and radial distance within the inner magnetosphere. Observations are categorized into three types: waves observed by only one spacecraft, waves measured by both spacecraft simultaneously, and waves observed by both spacecraft with some time lag. Analysis reveals that dayside (and H+ band) EMIC waves more frequently span larger spatial areas, while nightside (and He+ band) waves are more often localized but can persist many hours. These investigations give insight into the nature of EMIC wave generation and support more accurate quantification of their effects on the ring current and outer radiation belt.

  9. Finite size scaling study of dynamical phase transitions in two dimensional models: ferromagnet, symmetric and non symmetric spin glasses

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, A.U.; Derrida, B.

    1988-10-01

    We study the time evolution of two configurations submitted to the same thermal noise for several two-dimensional models (Ising ferromagnet, symmetric spin glass, non-symmetric spin glass). For all these models, we find a nonzero critical temperature above which the two configurations always meet. Using finite-size scaling ideas, we determine this dynamical phase transition and some of the critical exponents for the three models. For the ferromagnet, the transition T_c ≈ 2.25 coincides with the Curie temperature, whereas for the two spin glass models (±J distribution of bonds) we obtain T_c ≈ 1.5-1.7.
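
    The quantity tracked in this kind of study is the "damage", i.e. the fraction of sites at which two copies of the system, updated with identical random numbers, disagree. The Python sketch below is a minimal illustration for the 2D Ising ferromagnet with heat-bath dynamics and shared noise; the temperature, lattice size and initial damage are arbitrary, and the spin-glass cases would differ only in the bond distribution.

    import numpy as np

    def heat_bath_sweep(spins, beta, J, rand):
        """One sequential heat-bath sweep; `rand` holds the shared random numbers."""
        L = spins.shape[0]
        for idx in range(L * L):
            i, j = divmod(idx, L)
            h = J * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                     + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))   # P(spin = +1 | neighbours)
            spins[i, j] = 1 if rand[idx] < p_up else -1

    rng = np.random.default_rng(1)
    L, J, T = 32, 1.0, 2.0                 # T below the reported transition, so damage should persist
    beta = 1.0 / T
    a = rng.choice([-1, 1], size=(L, L))
    b = a.copy()
    b[L // 2, L // 2] *= -1                # initial damage: a single flipped spin

    for _ in range(200):
        shared = rng.random(L * L)         # identical thermal noise for both copies
        heat_bath_sweep(a, beta, J, shared)
        heat_bath_sweep(b, beta, J, shared)

    print(f"damage after 200 sweeps: {np.mean(a != b):.3f}")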

  10. Willingness to pay and size of health benefit: an integrated model to test for 'sensitivity to scale'.

    Science.gov (United States)

    Yeung, Raymond Y T; Smith, Richard D; McGhee, Sarah M

    2003-09-01

    A key theoretical prediction concerning willingness to pay is that it is positively correlated with benefit size and is assessed by testing the 'sensitivity to scale (scope)'. 'External' (between-sample) sensitivity tests are usually regarded as less powerful than 'internal' (within-subject) tests. However, the latter may suffer from 'anchoring' effects. This paper studies the statistical power of these tests by questioning the distributional assumption of empirical data. We present an integrated model to capture both internal and external variations, while controlling for sample heterogeneity, applied to data from a survey estimating the value of reducing symptom-days. Results indicate that once data is properly transformed, WTP becomes 'scale sensitive' and consistent with diminishing marginal utility theory.

  11. An LES study of pollen dispersal from isolated populations: Effects of source size and boundary-layer scaling

    Science.gov (United States)

    Chamecki, Marcelo; Meneveau, Charles; Parlange, Marc B.

    2008-11-01

    A framework to simulate pollen dispersal in the atmospheric boundary layer based on the large eddy simulation technique is developed. Pollen is represented by a continuum concentration field and is evolved following an advection-diffusion equation including a gravitational settling term. The approach is validated against classical data on point-source releases and our own field data for a natural ragweed field. The LES is further used as a tool to investigate the effect of source size on the patterns of pollen ground deposition, an issue of fundamental importance in the development of policies for genetically modified crops. The cross-wind integrated deposition is shown to scale with the pollen boundary-layer height at the trailing edge of the field and a simple practical expression based on the development of the pollen boundary layer is proposed to scale results from small test fields to realistic agricultural conditions.
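
    In such a framework the resolved pollen concentration field C is typically evolved with an advection-diffusion equation augmented by a gravitational settling term; in generic form (the study's particular sub-grid closure is not reproduced here),

        \frac{\partial C}{\partial t} + \nabla \cdot (\mathbf{u}\, C) = \nabla \cdot \left( D\, \nabla C \right) + w_s \, \frac{\partial C}{\partial z},

    where u is the resolved velocity field, D an eddy diffusivity, w_s the gravitational settling speed, and z the vertical coordinate (positive upward).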

  12. Patterns and Pathways of Evolving Catchment Response in a Medium-Sized Mediterranean Catchment on a Millennium Scale

    Science.gov (United States)

    van Beek, L. P. H.; Bierkens, M. F. P.

    2012-04-01

    The meso-scale landscape dynamics model, CALEROS, has been developed to simulate the interactions between climate, soil production and erosion, vegetation and land use on geomorphological to human time scales in Mediterranean environments. Starting from an initial landscape consisting of a DTM, soil distribution and underlying lithology, the landscape is free to develop in response to the imposed climate variability and seismicity. In addition to changes in soil distribution and bedrock lowering, this includes the establishment of vegetation as conditioned by a selection of plant functional types and, optionally, population and land use dynamics as conditioned by land use scenarios specifying technological and dietary constraints for different periods. As such CALEROS is well-suited to investigate the relative impacts of climate, land cover and human activities on the hydrological catchment response and the associated sediment fluxes due to soil erosion and mass movements. Here we use CALEROS to i) investigate the redistribution of water and sediment across the landscape in a medium-sized Mediterranean catchment (Contrada Maddalena; ~14 km^2, Calabria, Italy) and ii) establish patterns of co-evolution in soil properties and vegetation under pristine and anthropogenically impacted conditions on a millennium scale. Using summary statistics to describe the emergent properties and to verify them against observations, we then delineate areas of uniform morphology and describe the various pathways of development. This information allows us to identify elements of consistent hydrological response and the associated transfer of material across different scales. It also provides essential information on key feedbacks and the resulting convergence or divergence in landscape development under the impact of climatic or seismic events or human intervention. Although the results are evidently conditioned by the physiographic setting of the study area and by the

  13. A metabolic and body-size scaling framework for parasite within-host abundance, biomass, and energy flux.

    Science.gov (United States)

    Hechinger, Ryan F

    2013-08-01

    Energetics may provide a useful currency for studying the ecology of parasite assemblages within individual hosts. Parasite assemblages may also provide powerful models to study general principles of ecological energetics. Yet there has been little ecological research on parasite-host energetics, probably due to methodological difficulties. However, the scaling relationships of individual metabolic rate with body or cell size and temperature may permit us to tackle the energetics of parasite assemblages in hosts. This article offers the foundations and initial testing of a metabolic theory of ecology (MTE) framework for parasites in hosts. I first provide equations to estimate energetic flux through observed parasite assemblages. I then develop metabolic scaling theory for parasite abundance, energetics, and biomass in individual hosts. In contrast to previous efforts, the theory factors in both host and parasite metabolic scaling, how parasites use host space, and whether energy or space dictates carrying capacity. Empirical tests indicate that host energetic flux can set parasite carrying capacity, which decreases as predicted considering the scaling of host and parasite metabolic rates. The theory and results also highlight that the phenomenon of "energetic equivalence" is not an assumption of MTE but a possible outcome contingent on how species partition resources. Hence, applying MTE to parasites can lend mechanistic, quantitative, predictive insight into the nature of parasitism and can inform general ecological theory.
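
    The MTE machinery invoked here builds on the standard scaling of individual metabolic rate with body mass and temperature, usually written (as a general relation, not a result specific to this article) as

        B = b_0 \, M^{3/4} \, e^{-E/(kT)},

    where B is the individual metabolic rate, M the body (or cell) mass, E an activation energy, k Boltzmann's constant and T the absolute temperature; summing such rates over a parasite assemblage, constrained by the host's own energy flux, leads to the abundance and biomass predictions tested in the article.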

  14. Minimum spanning tree filtering of correlations for varying time scales and size of fluctuations

    CERN Document Server

    Kwapien, Jaroslaw; Forczek, Marcin; Drozdz, Stanislaw

    2016-01-01

    Based on a recently proposed q-dependent detrended cross-correlation coefficient ρ_q (J. Kwapień, P. Oświęcimka, S. Drożdż, Phys. Rev. E 92, 052815 (2015)), we introduce a family of q-dependent minimum spanning trees (qMST) that are selective to cross-correlations between different fluctuation amplitudes and different time scales of multivariate data. They inherit this ability directly from the coefficients ρ_q, which are processed here to construct a distance matrix that serves as the input to the MST-constructing Kruskal's algorithm. In order to illustrate their performance, we apply the qMSTs to sample empirical data from the American stock market and discuss the results. We show that the qMST graphs can complement ρ_q in the detection of "hidden" correlations that cannot be observed by the MST graphs based on ρ_DCCA and, therefore, they can be useful in many areas where multivariate cross-correlations are of interest (e.g., in portfolio analysis).
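
    The construction described here, turning a correlation matrix into a distance matrix and feeding it to Kruskal's algorithm, can be sketched generically as follows. The ρ_q matrix below is hypothetical, and the usual metric d = sqrt(2(1 - ρ)) is assumed.

    import numpy as np

    def kruskal_mst(dist):
        """Kruskal's algorithm with union-find on a full symmetric distance matrix."""
        n = dist.shape[0]
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
        mst = []
        for d, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:                        # edge joins two components: keep it
                parent[ri] = rj
                mst.append((i, j, d))
            if len(mst) == n - 1:
                break
        return mst

    # Hypothetical q-dependent correlation matrix rho_q for four assets
    rho_q = np.array([
        [1.00, 0.62, 0.10, 0.35],
        [0.62, 1.00, 0.05, 0.40],
        [0.10, 0.05, 1.00, 0.20],
        [0.35, 0.40, 0.20, 1.00],
    ])
    dist = np.sqrt(2.0 * (1.0 - rho_q))         # correlation -> metric distance
    for i, j, d in kruskal_mst(dist):
        print(f"edge {i}-{j}: d = {d:.3f}")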

  15. Extended scaling and Paschen law for micro-sized radiofrequency plasma breakdown

    Science.gov (United States)

    Lee, Min Uk; Lee, Jimo; Lee, Jae Koo; Yun, Gunsu S.

    2017-03-01

    Single-particle motion analysis and particle-in-cell simulations merged with Monte Carlo collisions (PIC/MCC) are compared to explain the substantial breakdown voltage reduction for a helium microwave discharge above a critical frequency corresponding to the transition from the drift-dominant to the diffusion-dominant electron loss regime. The single-particle analysis suggests that the transition frequency is proportional to p^-m d^-(m+1), where p is the neutral gas pressure, d is the gap distance, and m is a numerical parameter; this is confirmed by the PIC simulation. In the low-frequency or drift-dominant regime, i.e., the γ-regime, the secondary electron emission induced by ion drift motion is the key parameter determining the breakdown voltage. A fluid analysis including the secondary emission coefficient γ yields an extended Paschen law, which implies that the breakdown voltage is determined by pd, f/p, γ, and d/R, where f is the frequency of the radio-frequency or microwave source and R is the diameter of the electrode. The extended Paschen law reproduces the same scaling law for the transition frequency and is confirmed by independent PIC and fluid simulations.
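
    For comparison, the classical DC Paschen law that the extended law generalizes expresses the breakdown voltage purely as a function of the pressure-distance product,

        V_B = \frac{B \, pd}{\ln(A \, pd) - \ln\!\left[\ln\!\left(1 + 1/\gamma\right)\right]},

    where A and B are gas-dependent constants and γ is the secondary electron emission coefficient; the extension discussed above adds the dependence on f/p and d/R that arises once the drive frequency and electrode geometry matter.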

  16. Neutral and ionized gas around the post-Red Supergiant IRC+10420 at au size scales

    CERN Document Server

    Oudmaijer, Rene

    2012-01-01

    IRC +10420 is one of the few known massive stars in rapid transition from the Red Supergiant phase to the Wolf-Rayet or Luminous Blue Variable phase. The star has an ionised wind and using the Br gamma hydrogen recombination emission we assess the mass-loss on spatial scales of order 1 au. We present new VLT Interferometer AMBER data which are combined with all other AMBER data in the literature. The final dataset covers a position angle range of 180 degrees and baselines up to 110 meters. The spectrally dispersed visibilities, differential phases and line flux are conjointly analyzed and modelled. We also present AMBER/FINITO observations which cover a larger wavelength range and allow us to observe the Na I doublet at 2.2 micron. The data are complemented by X-Shooter data, which provide a higher spectral resolution view. The Brackett gamma line and the Na I doublet are both spatially resolved. After correcting the AMBER data for the fact that the lines are not spectrally resolved, we find that Br gamma tra...

  17. Scaling of TNSA-accelerated proton beams with laser energy and focal spot size

    Energy Technology Data Exchange (ETDEWEB)

    Obst, Lieselotte; Metzkes, Josefine; Schramm, Ulrich [Helmholtz-Zentrum Dresden - Rossendorf, Dresden (Germany); Technische Universitaet Dresden, Dresden (Germany); Zeil, Karl; Kraft, Stephan [Helmholtz-Zentrum Dresden - Rossendorf, Dresden (Germany)

    2014-07-01

    We investigate the acceleration of high energy proton pulses generated by relativistic laser-plasma interaction. The scope of this work was the systematic investigation of the scaling of the laser proton acceleration process in the ultra-short pulse regime in order to identify feasible routes towards the potential medical application of this accelerator technology for the development of compact proton sources for radiation therapy. We present an experimental study of the proton beam properties under variation of the laser intensity irradiating thin foil targets. This was achieved by employing different parabolic mirrors with various focal lengths. Hence, in contrast to moving the target in and out of focus, the target was always irradiated with an optimized focal spot. By observing the back reflected light of the laser beam from the target front side, pre-plasma effects on the laser absorption could be investigated. The study was performed at the 150 TW Draco Laser facility of the Helmholtz-Zentrum Dresden-Rossendorf with ultrashort (30 fs) laser pulses of intensities of about 8 × 10^20 W/cm^2.

  18. Finite-size corrections to scaling of the magnetization distribution in the two-dimensional XY model at zero temperature.

    Science.gov (United States)

    Palma, G; Niedermayer, F; Rácz, Z; Riveros, A; Zambrano, D

    2016-08-01

    The zero-temperature, classical XY model on an L×L square lattice is studied by exploring the distribution Φ_{L}(y) of its centered and normalized magnetization y in the large-L limit. An integral representation of the cumulant generating function, known from earlier works, is used for the numerical evaluation of Φ_{L}(y), and the limit distribution Φ_{L→∞}(y)=Φ_{0}(y) is obtained with high precision. The two leading finite-size corrections Φ_{L}(y)-Φ_{0}(y)≈a_{1}(L)Φ_{1}(y)+a_{2}(L)Φ_{2}(y) are also extracted both from numerics and from analytic calculations. We find that the amplitude a_{1}(L) scales as ln(L/L_{0})/L^{2} and the shape correction function Φ_{1}(y) can be expressed through the low-order derivatives of the limit distribution, Φ_{1}(y)=[yΦ_{0}(y)+Φ_{0}^{'}(y)]^{'}. Thus, Φ_{1}(y) carries the same universal features as the limit distribution and can be used for consistency checks of universality claims based on finite-size systems. The second finite-size correction has an amplitude a_{2}(L)∝1/L^{2} and one finds that a_{2}Φ_{2}(y)≪a_{1}Φ_{1}(y) already for small system size (L>10). We illustrate the feasibility of observing the calculated finite-size corrections by performing simulations of the XY model at low temperatures, including T=0.

  19. Picturing the Size and Site of Stroke With an Expanded National Institutes of Health Stroke Scale.

    Science.gov (United States)

    Agis, Daniel; Goggins, Maria B; Oishi, Kumiko; Oishi, Kenichi; Davis, Cameron; Wright, Amy; Kim, Eun Hye; Sebastian, Rajani; Tippett, Donna C; Faria, Andreia; Hillis, Argye E

    2016-06-01

    The National Institutes of Health Stroke Scale (NIHSS) includes minimal assessment of cognitive function, particularly in right hemisphere (RH) stroke. Descriptions of the Cookie Theft picture from the NIHSS allow analyses that (1) correlate with aphasia severity and (2) identify communication deficits in RH stroke. We hypothesized that analysis of the picture description contributes valuable information about volume and location of acute stroke. We evaluated 67 patients with acute ischemic stroke (34 left hemisphere [LH]; 33 RH) with the NIHSS, analysis of the Cookie Theft picture, and magnetic resonance imaging, compared with 35 sex- and age-matched controls. We evaluated descriptions for total content units (CU), syllables, ratio of left:right CU, CU/minute, and percent interpretive CU, based on previous studies. Lesion volume and percent damage to regions of interest were measured on diffusion-weighted imaging. Multivariable linear regression identified variables associated with infarct volume, independently of NIHSS score, age and sex. Patients with RH and LH stroke differed from controls, but not from each other, on CU, syllables/CU, and CU/minute. Left:right CU was lower in RH compared with LH stroke. CU, syllables/CU, and NIHSS each correlated with lesion volume in LH and RH stroke. Lesion volume was best accounted for by a model that included CU, syllables/CU, NIHSS, left:right CU, percent interpretive CU, and age, in LH and RH stroke. Each discourse variable and NIHSS score were associated with percent damage to different regions of interest, independently of lesion volume and age. Brief picture description analysis complements NIHSS scores in predicting stroke volume and location. © 2016 The Authors.

  20. Observation of chorus waves by the Van Allen Probes: Dependence on solar wind parameters and scale size

    Science.gov (United States)

    Aryan, Homayon; Sibeck, David; Balikhin, Michael; Agapitov, Oleksiy; Kletzing, Craig

    2016-08-01

    Highly energetic electrons in the Earth's Van Allen radiation belts can cause serious damage to spacecraft electronic systems and affect the atmospheric composition if they precipitate into the upper atmosphere. Whistler mode chorus waves have attracted significant attention in recent decades for their crucial role in the acceleration and loss of energetic electrons that ultimately change the dynamics of the radiation belts. The distribution of these waves in the inner magnetosphere is commonly presented as a function of geomagnetic activity. However, geomagnetic indices are nonspecific parameters that are compiled from imperfectly covered ground based measurements. The present study uses wave data from the two Van Allen Probes to present the distribution of lower band chorus waves not only as functions of a single geomagnetic index and solar wind parameters but also as functions of combined parameters. The current study also takes advantage of the unique equatorial orbit of the Van Allen Probes to estimate the average scale size of chorus wave packets, during close separations between the two spacecraft, as a function of radial distance, magnetic latitude, and geomagnetic activity. Results show that the average scale size of chorus wave packets is approximately 1300-2300 km. The results also show that the inclusion of combined parameters can provide better representation of the chorus wave distributions in the inner magnetosphere and therefore can further improve our knowledge of the acceleration and loss of radiation belt electrons.

  1. The finite-size scaling study of four-dimensional Ising model in the presence of external magnetic field

    Science.gov (United States)

    Merdan, Ziya; Kürkçü, Cihan; Öztürk, Mustafa K.

    2014-12-01

    The four-dimensional ferromagnetic Ising model in an external magnetic field is simulated with the Creutz cellular automaton algorithm using finite-size lattices with linear dimension 4 ≤ L ≤ 8. The critical temperature of the infinite lattice obtained from the susceptibility for h = 0, T_c^χ(∞) = 6.680(1), agrees well with the value T_c(∞) ≈ 6.68 obtained previously using different methods. Moreover, the results for h = 0.00025 in our work also agree with all the results obtained for h = 0 in the literature; however, there are no previous works for h ≠ 0. The value of the field critical exponent, δ = 3.0136(3), is in good agreement with δ = 3, which follows from Widom's scaling law. The finite-size scaling relations for |M_L(t)| and χ_L(t) are verified for 0 ≤ h ≤ 0.001, but they fail for 0.0025 ≤ h ≤ 0.1.
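
    The record refers to the finite-size scaling relations for |M_L(t)| and χ_L(t) without writing them out. The standard forms below are a hedged reminder of what is usually meant; the exponents are those of the Ising universality class, and the multiplicative logarithmic corrections expected at the upper critical dimension d = 4 are omitted for simplicity.

```latex
|M_L(t)| \simeq L^{-\beta/\nu}\, f_M\!\left(t\,L^{1/\nu}\right),
\qquad
\chi_L(t) \simeq L^{\gamma/\nu}\, f_\chi\!\left(t\,L^{1/\nu}\right),
\qquad
t = \frac{T - T_c}{T_c}.
```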

  2. Bioresorbable scaffolds for bone tissue engineering: optimal design, fabrication, mechanical testing and scale-size effects analysis.

    Science.gov (United States)

    Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R

    2015-03-01

    Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls for engineered scaffolds. This work investigates how the design-to-manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit cells, and thus scale-size effects on the mechanical response considering finite periodicity are investigated and compared with the predictions from the homogenization method, which assumes in the limit infinitely repeated unit cells. Results show that a limited number of unit cells (3-5 repeated on a side) introduces some scale effects, but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However, good regression correlations (R^2 > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs.

  3. Main transition in the Pink membrane model: finite-size scaling and the influence of surface roughness.

    Science.gov (United States)

    Sadeghi, Sina; Vink, R L C

    2012-06-01

    We consider the main transition in single-component membranes using computer simulations of the Pink model [D. A. Pink et al., Biochemistry 19, 349 (1980)]. We first show that the accepted parameters of the Pink model yield a main transition temperature that is systematically below experimental values. This resolves an issue that was first pointed out by Corvera and co-workers [Phys. Rev. E 47, 696 (1993)]. In order to yield the correct transition temperature, the strength of the van der Waals coupling in the Pink model must be increased; by using finite-size scaling, a set of optimal values is proposed. We also provide finite-size scaling evidence that the Pink model belongs to the universality class of the two-dimensional Ising model. This finding holds irrespective of the number of conformational states. Finally, we address the main transition in the presence of quenched disorder, which may arise in situations where the membrane is deposited on a rough support. In this case, we observe a stable multidomain structure of gel and fluid domains, and the absence of a sharp transition in the thermodynamic limit.

  4. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock full of information and activities, we begin with a look at force, motion and work, and give examples of simple machines in daily life. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver, as all the reading passages and student activities are provided. Presented in s

  5. Refrigerating machine oil

    Energy Technology Data Exchange (ETDEWEB)

    Nozawa, K.

    1981-03-17

    Refrigerating machine oil to be filled into a sealed motor-compressor unit constituting a refrigerating cycle system, such as an electric refrigerator, an electric cold-storage box, a small-scale electric refrigerating showcase, a small-scale electric cold-storage showcase and the like, is arranged to have specifically enhanced properties, so that smaller initial driving power consumption of the sealed motor-compressor and easier supply of the predetermined amount of refrigerating machine oil to the refrigerating system are both guaranteed, even in rather low environmental temperature conditions.

  6. Minimum spanning tree filtering of correlations for varying time scales and size of fluctuations

    Science.gov (United States)

    Kwapień, Jarosław; Oświecimka, Paweł; Forczek, Marcin; DroŻdŻ, Stanisław

    2017-05-01

    Based on a recently proposed q-dependent detrended cross-correlation coefficient, ρ_q [J. Kwapień, P. Oświęcimka, and S. Drożdż, Phys. Rev. E 92, 052815 (2015), 10.1103/PhysRevE.92.052815], we generalize the concept of the minimum spanning tree (MST) by introducing a family of q-dependent minimum spanning trees (qMSTs) that are selective to cross-correlations between different fluctuation amplitudes and different time scales of multivariate data. They inherit this ability directly from the coefficients ρ_q, which are processed here to construct a distance matrix that serves as the input to Kruskal's MST-constructing algorithm. The conventional MST with detrending corresponds in this context to q = 2. In order to illustrate their performance, we apply the qMSTs to sample empirical data from the American stock market and discuss the results. We show that the qMST graphs can complement ρ_q in disentangling "hidden" correlations that cannot be observed in the MST graphs based on ρ_DCCA, and therefore, they can be useful in many areas where multivariate cross-correlations are of interest. As an example, we apply this method to empirical data from the stock market and show that by constructing the qMSTs for a spectrum of q values we obtain more information about the correlation structure of the data than by using q = 2 only. More specifically, we show that two sets of signals that differ from each other statistically can give comparable trees for q = 2, while only by using the trees for q ≠ 2 do we become able to distinguish between these sets. We also show that a family of qMSTs for a range of q expresses the diversity of correlations in a manner resembling multifractal analysis, in which one computes a spectrum of the generalized fractal dimensions, the generalized Hurst exponents, or the multifractal singularity spectra: the more diverse the correlations are, the more variable the tree topology is for different q's. As regards the correlation structure
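
    To make the tree-construction step concrete, the sketch below turns a correlation matrix into a distance matrix and extracts a minimum spanning tree with SciPy. The correlation values and the distance transform d = sqrt(2(1 - ρ)) are generic placeholders standing in for the q-dependent ρ_q coefficients of the paper, and SciPy's routine is not necessarily Kruskal's algorithm, which the authors use.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical correlation matrix for five time series (a stand-in for rho_q).
rho = np.array([
    [1.00, 0.60, 0.30, 0.10, 0.20],
    [0.60, 1.00, 0.40, 0.15, 0.25],
    [0.30, 0.40, 1.00, 0.50, 0.35],
    [0.10, 0.15, 0.50, 1.00, 0.45],
    [0.20, 0.25, 0.35, 0.45, 1.00],
])

# Common correlation-to-distance transform used in MST studies of markets.
dist = np.sqrt(2.0 * (1.0 - rho))
np.fill_diagonal(dist, 0.0)

# For dense input, zero entries are treated as absent edges; here only the
# diagonal is zero, so all pairwise distances are candidate edges.
tree = minimum_spanning_tree(dist)
for i, j in zip(*tree.nonzero()):
    print(f"edge {i}-{j}: distance {tree[i, j]:.3f}")
```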

  7. Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model

    Science.gov (United States)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present the development of a soil evolution framework and multiscale modelling of the surfaces of Mars, the Moon and Itokawa, providing an atlas of extra-terrestrial Particle Size Distributions (PSDs). These PSDs are built with a tailoring method which interconnects several datasets from different sites captured by the various missions. The final integrated product is then justified through a soil evolution analysis model constructed from fundamental physical principles (Charalambous, 2013). The construction of the PSD takes into account the macroscale fresh primary impacts and their products, the mesoscale distributions obtained from the in-situ data of surface missions (Golombek et al., 1997, 2012) and finally the microscopic-scale distributions provided by Curiosity and the Phoenix Lander (Pike, 2011). The distribution naturally extends to the size scales at which no data currently exist, owing to the lack of scientific instruments capturing the particle populations at those scales. The extension is based on the model distribution (Charalambous, 2013), which takes as parameters known values of material-specific fragmentation probabilities and grinding limits. Additionally, the establishment of a closed-form statistical distribution provides a quantitative description of the soil's structure. Consequently, reverse engineering of the model distribution allows the synthesis of soil that faithfully represents the particle population at the studied sites (Charalambous, 2011). Such a representation essentially delivers a virtual soil environment to work with for numerous applications. A specific application demonstrated here is the probability of successful drilling as a function of distance, in an effort to aid the HP3 instrument of the 2016 InSight Mission to Mars. Pike, W. T., et al. "Quantification of the dry history of the Martian soil inferred from in situ microscopy

  8. Disk galaxy scaling relations at intermediate redshifts. I. The Tully-Fisher and velocity-size relations

    Science.gov (United States)

    Böhm, Asmus; Ziegler, Bodo L.

    2016-07-01

    Aims: Galaxy scaling relations such as the Tully-Fisher relation (between the maximum rotation velocity Vmax and luminosity) and the velocity-size relation (between Vmax and the disk scale length) are powerful tools to quantify the evolution of disk galaxies with cosmic time. Methods: We took spatially resolved slit spectra of 261 field disk galaxies at redshifts up to z ≈ 1 using the FORS instruments of the ESO Very Large Telescope. The targets were selected from the FORS Deep Field and William Herschel Deep Field. Our spectroscopy was complemented with HST/ACS imaging in the F814W filter. We analyzed the ionized gas kinematics by extracting rotation curves from the two-dimensional spectra. Taking into account all geometrical, observational, and instrumental effects, these rotation curves were used to derive the intrinsic Vmax. Results: Neglecting galaxies with disturbed kinematics or insufficient spatial rotation curve extent, Vmax was reliably determined for 124 galaxies covering redshifts 0.05 gas and/or small satellites. From scrutinizing the combined evolution in luminosity and size, we find that the galaxies that show the strongest evolution toward smaller sizes at z ≈ 1 are not those that feature the strongest evolution in luminosity, and vice versa. Based on observations with the European Southern Observatory Very Large Telescope (ESO-VLT), observing run IDs 65.O-0049, 66.A-0547, 68.A-0013, 69.B-0278B, 70.B-0251A and 081.B-0107A.The full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/592/A64

  9. Finite-size scaling in Ising-like systems with quenched random fields: evidence of hyperscaling violation.

    Science.gov (United States)

    Vink, R L C; Fischer, T; Binder, K

    2010-11-01

    In systems belonging to the universality class of the random field Ising model, the standard hyperscaling relation between critical exponents does not hold, but is replaced with a modified hyperscaling relation. As a result, standard formulations of finite-size scaling near critical points break down. In this work, the consequences of modified hyperscaling are analyzed in detail. The most striking outcome is that the free-energy cost ΔF of interface formation at the critical point is no longer a universal constant, but instead increases as a power law with system size, ΔF ∝ L^θ, with θ as the violation of hyperscaling critical exponent and L as the linear extension of the system. This modified behavior facilitates a number of numerical approaches that can be used to locate critical points in random field systems from finite-size simulation data. We test and confirm the approaches on two random field systems in three dimensions, namely, the random field Ising model and the demixing transition in the Widom-Rowlinson fluid with quenched obstacles.
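
    For context, the modified hyperscaling relation alluded to above is conventionally written with the violation exponent θ; the record does not spell it out, so the textbook form is quoted here only as a hedged reminder.

```latex
2 - \alpha = (d - \theta)\,\nu \quad \text{(random-field case)},
\qquad\text{compared with}\qquad
2 - \alpha = d\,\nu \quad \text{(standard hyperscaling)},
```

which is consistent with the interfacial free-energy cost growing as ΔF ∝ L^θ at criticality, as stated in the abstract.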

  10. Finite-Size Scaling Analysis of the Conductivity of Dirac Electrons on a Surface of Disordered Topological Insulators

    Science.gov (United States)

    Takane, Yositake

    2016-09-01

    Two-dimensional (2D) massless Dirac electrons appear on a surface of three-dimensional topological insulators. The conductivity of such a 2D Dirac electron system is studied for strong topological insulators in the case of the Fermi level being located at the Dirac point. The average conductivity is numerically calculated for a system of length L and width W under the periodic or antiperiodic boundary condition in the transverse direction, and its behavior is analyzed by applying a finite-size scaling approach. It is shown that the average conductivity is minimized at the clean limit, where it becomes scale-invariant and depends only on L/W and the boundary condition. It is also shown that once disorder is introduced, the average conductivity monotonically increases with increasing L. Hence, the system becomes a perfect metal in the limit of L → ∞ except at the clean limit, which should be identified as an unstable fixed point. Although the scaling curve of the average conductivity strongly depends on L/W and the boundary condition near the unstable fixed point, it becomes almost independent of them as the average conductivity increases, implying that it asymptotically obeys a universal law.

  11. Computational and Experimental Study of the Transient Transport Phenomena in a Full-Scale Twin-Roll Continuous Casting Machine

    Science.gov (United States)

    Xu, Mianguang; Li, Zhongyang; Wang, Zhaohui; Zhu, Miaoyong

    2017-02-01

    To gain a fundamental understanding of the transient fluid flow in twin-roll continuous casting, the current paper applies both large eddy simulation (LES) and full-scale water modeling experiments to investigate the characteristics of the top free surface, the stirring effect of the roll rotation, boundary layer fluctuations, and backflow stability. The results show that the characteristics of the top free surface and the flow field in the wedge-shaped pool region are quite different with and without consideration of the roll rotation. The roll rotation decreases the instantaneous fluctuation range of the top free surface but increases its horizontal velocity. The stirring effect of the roll rotation makes the flow field more homogeneous, and there is clear shear flow on the rotating roll surface. The vortex shedding induced by the Kármán vortex street from the submerged entry nozzle (SEN) causes a "velocity magnitude wave" and strongly influences the boundary layer stability and the backflow stability. The boundary layer fluctuations, or the "velocity magnitude wave" induced by the vortex shedding, could give rise to internal porosity. In the strip continuous casting process, the vortex shedding phenomenon indicates that even laminar flow can give rise to instability and should therefore be carefully considered in the design of the feeding system and the setting of the operating parameters.

  12. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    Science.gov (United States)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

    A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes yield water-content evolutions which are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the
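
    The local entry criterion mentioned above can, in its simplest reading, be illustrated by a Young-Laplace-type threshold; the form below is a deliberate simplification (the paper evaluates the entry pressure on the actual throat geometry, which is generally more involved).

```latex
P_c^{\mathrm{entry}} \;\approx\; \frac{2\,\gamma \cos\theta}{r_{\mathrm{eff}}},
```

where γ is the interfacial tension, θ the contact angle, and r_eff an effective throat radius; a throat is invaded by the non-wetting phase once the imposed capillary pressure exceeds this threshold.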

  13. Electric machine

    Science.gov (United States)

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY]; Reddy, Patel Bhageerath [Madison, WI]

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  14. Cybernetic anthropomorphic machine systems

    Science.gov (United States)

    Gray, W. E.

    1974-01-01

    Functional descriptions are provided for a number of cybernetic man machine systems that augment the capacity of normal human beings in the areas of strength, reach or physical size, and environmental interaction, and that are also applicable to aiding the neurologically handicapped. Teleoperators, computer control, exoskeletal devices, quadruped vehicles, space maintenance systems, and communications equipment are considered.

  15. Scaled photographs of surf over the full range of breaker sizes on the north shore of Oahu and Jaws, Maui, Hawaiian Islands (NODC Accession 0001753)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital surf photographs were scaled using surfers as height benchmarks to estimate the size of the breakers. Historical databases for surf height in Hawaii are...

  16. MICRO/NANO-MACHINING ON SILICON SURFACE WITH A MODIFIED ATOMIC FORCE MICROSCOPE

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    To understand the deformation and removal mechanisms of material on the nano scale and at ultralow loads, a systematic study of AFM micro/nano-machining on single-crystal silicon is conducted. The results indicate that AFM nanomachining offers precise dimensional controllability and good surface quality on the nanometer scale. An SEM is adopted to observe the nano-machined region and chips; the results indicate that the material removal mechanisms change with the applied normal load. An XPS is used to analyze the changes in chemical composition inside and outside the nano-machined region. Nano-indentation conducted with the same AFM diamond tip on the machined region shows a large discrepancy compared with that on the macro scale: the calculated results show higher nano-hardness and elastic modulus than the normal values. This phenomenon can be regarded as the indentation size effect (ISE).

  17. Re-fraction: a machine learning approach for deterministic identification of protein homologues and splice variants in large-scale MS-based proteomics.

    Science.gov (United States)

    Yang, Pengyi; Humphrey, Sean J; Fazakerley, Daniel J; Prior, Matthew J; Yang, Guang; James, David E; Yang, Jean Yee-Hwa

    2012-05-04

    A key step in the analysis of mass spectrometry (MS)-based proteomics data is the inference of proteins from identified peptide sequences. Here we describe Re-Fraction, a novel machine learning algorithm that enhances deterministic protein identification. Re-Fraction utilizes several protein physical properties to assign proteins to expected protein fractions that comprise large-scale MS-based proteomics data. This information is then used to appropriately assign peptides to specific proteins. This approach is sensitive, highly specific, and computationally efficient. We provide algorithms and source code for the current version of Re-Fraction, which accepts output tables from the MaxQuant environment. Nevertheless, the principles behind Re-Fraction can be applied to other protein identification pipelines where data are generated from samples fractionated at the protein level. We demonstrate the utility of this approach through reanalysis of data from a previously published study and generate lists of proteins deterministically identified by Re-Fraction that were previously only identified as members of a protein group. We find that this approach is particularly useful in resolving protein groups composed of splice variants and homologues, which are frequently expressed in a cell- or tissue-specific manner and may have important biological consequences.

  18. Functional Network Construction in Arabidopsis Using Rule-Based Machine Learning on Large-Scale Data Sets

    Science.gov (United States)

    Bassel, George W.; Glaab, Enrico; Marquez, Julietta; Holdsworth, Michael J.; Bacardit, Jaume

    2011-01-01

    The meta-analysis of large-scale postgenomics data sets within public databases promises to provide important novel biological knowledge. Statistical approaches including correlation analyses in coexpression studies of gene expression have emerged as tools to elucidate gene function using these data sets. Here, we present a powerful and novel alternative methodology to computationally identify functional relationships between genes from microarray data sets using rule-based machine learning. This approach, termed “coprediction,” is based on the collective ability of groups of genes co-occurring within rules to accurately predict the developmental outcome of a biological system. We demonstrate the utility of coprediction as a powerful analytical tool using publicly available microarray data generated exclusively from Arabidopsis thaliana seeds to compute a functional gene interaction network, termed Seed Co-Prediction Network (SCoPNet). SCoPNet predicts functional associations between genes acting in the same developmental and signal transduction pathways irrespective of the similarity in their respective gene expression patterns. Using SCoPNet, we identified four novel regulators of seed germination (ALTERED SEED GERMINATION5, 6, 7, and 8), and predicted interactions at the level of transcript abundance between these novel and previously described factors influencing Arabidopsis seed germination. An online Web tool to query SCoPNet has been developed as a community resource to dissect seed biology and is available at http://www.vseed.nottingham.ac.uk/. PMID:21896882

  19. Using factor analysis scales of generalized amino acid information for prediction and characteristic analysis of β-turns in proteins based on a support vector machine model

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This paper offers a new combined approach to predict and characterize β-turns in proteins. The approach includes two key steps: how to represent the features of β-turns and how to develop a predictor. The first step is to use factor analysis scales of generalized amino acid information (FASGAI), involving hydrophobicity, alpha and turn propensities, bulky properties, compositional characteristics, local flexibility and electronic properties, to represent the features of β-turns in proteins. The second step is to construct a support vector machine (SVM) predictor of β-turns, trained on 426 proteins with a sevenfold cross-validation test. The SVM predictor was then used to predict β-turns in 547 and 823 proteins, separately, as external validation tests. Our results are compared with the previously best-known β-turn prediction methods and are shown to give comparable performance. Most significantly, the SVM model provides some information related to β-turn residues in proteins. The results demonstrate that the present combined approach may be used in the prediction of protein structures.
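
    To make the two-step scheme concrete, the sketch below trains an SVM on sliding-window feature vectors and scores it with cross-validation using scikit-learn. The window size, the random values standing in for the six FASGAI descriptors, and the labels are all invented for illustration and do not reproduce the paper's data or settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_residues, n_props, window = 2000, 6, 7         # hypothetical sizes (6 FASGAI factors)
props = rng.normal(size=(n_residues, n_props))    # stand-in for per-residue FASGAI scores
labels = rng.integers(0, 2, size=n_residues)      # stand-in for turn / non-turn labels

# One feature vector per central residue: concatenate the descriptor scores
# of a window of neighbouring residues.
half = window // 2
X = np.array([props[i - half:i + half + 1].ravel()
              for i in range(half, n_residues - half)])
y = labels[half:n_residues - half]

# RBF-kernel SVM evaluated by sevenfold cross-validation, echoing the paper's setup.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=7)
print("mean CV accuracy:", scores.mean())
```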

  20. Spectral statistics, finite-size scaling and multifractal analysis of quasiperiodic chain with p-wave pairing

    Science.gov (United States)

    Wang, Yucheng; Wang, Yancheng; Chen, Shu

    2016-11-01

    We study the spectral and wavefunction properties of a one-dimensional incommensurate system with p-wave pairing and show that the system exhibits a series of distinctive properties in its critical region. By studying the spectral statistics, we show that the bandwidth distribution and level spacing distribution in the critical region follow inverse power laws, which however break down in the extended and localized regions. By performing a finite-size scaling analysis, we obtain some critical exponents of the system and find that these exponents fulfill a hyperscaling law throughout the critical region. We also carry out a multifractal analysis of the system's wavefunctions using a box-counting method and show that the wavefunctions display different behaviors in the critical, extended and localized regions.
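
    A minimal version of the box-counting multifractal analysis mentioned above can be sketched as follows. The wavefunction here is a random placeholder, and the estimator simply fits the scaling of the q-th moments of the box probabilities, which is one common way of extracting generalized dimensions D_q (q = 1 needs a separate formula and is avoided).

```python
import numpy as np

def generalized_dimensions(psi, qs, box_sizes):
    """Estimate D_q from |psi|^2 by box counting on a 1D chain."""
    p = np.abs(psi) ** 2
    p /= p.sum()
    N = p.size
    Dq = []
    for q in qs:
        log_Z, log_l = [], []
        for l in box_sizes:
            nbox = N // l
            mu = p[:nbox * l].reshape(nbox, l).sum(axis=1)   # box probabilities mu_k
            log_Z.append(np.log(np.sum(mu ** q)))
            log_l.append(np.log(l / N))
        tau_q = np.polyfit(log_l, log_Z, 1)[0]               # mass exponent tau(q)
        Dq.append(tau_q / (q - 1.0))
    return np.array(Dq)

rng = np.random.default_rng(1)
psi = rng.normal(size=4096)        # placeholder amplitudes (an extended-like state)
print(generalized_dimensions(psi, qs=[0.5, 2.0, 3.0], box_sizes=[4, 8, 16, 32, 64]))
```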

  1. Langmuir monolayers of a hydrogenated/fluorinated catanionic surfactant: from the macroscopic to the nanoscopic size scale.

    Science.gov (United States)

    Blanco, Elena; Piñeiro, Angel; Miller, Reinhard; Ruso, Juan M; Prieto, Gerardo; Sarmiento, Félix

    2009-07-21

    Langmuir monolayers of the hydrogenated/fluorinated catanionic surfactant cetyltrimethylammonium perfluorooctanoate at the air/water interface are studied at room temperature. Excess Gibbs energies of mixing, ΔG^E, as well as transition areas and pressures, were obtained from the surface pressure-area isotherm. The ΔG^E curve indicates that tail-tail interactions are more important than head-head interactions at low pressures, and vice versa. Atomic force microscopy and molecular dynamics simulations allowed a fine characterization of the monolayer structure as a function of the area per molecule at the mesoscopic and nanoscopic size scales, respectively. A combined analysis of the techniques allows us to conclude that electrostatic interactions between the ionic head groups are dominant in the monolayer, while the hydrophobic parts are of secondary importance. Overall, the results obtained from the different techniques complement each other, giving a comprehensive characterization of the monolayer.
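
    The excess Gibbs energy of mixing quoted above is conventionally obtained by integrating the measured isotherms; the Goodrich-type expression below is given only as a hedged reminder of that standard procedure, since the record does not state it explicitly.

```latex
\Delta G^{E}(\pi) \;=\; N_A \int_{0}^{\pi} \left[ A_{12} - x_1 A_1 - x_2 A_2 \right] \mathrm{d}\pi' ,
```

where A_12 is the mean molecular area of the mixed monolayer at surface pressure π', A_1 and A_2 are those of the pure components, and x_1, x_2 are the mole fractions.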

  2. An optimum city size? The scaling relationship for urban population and fine particulate (PM2.5) concentration.

    Science.gov (United States)

    Han, Lijian; Zhou, Weiqi; Pickett, Steward T A; Li, Weifeng; Li, Li

    2016-01-01

    We utilize the distribution of PM2.5 concentration and population in large cities at the global scale to illustrate the relationship between urbanization and urban air quality. We found: 1) The relationship varies greatly among continents and countries. Large cities in North America, Europe, and Latin America have better air quality than those on other continents, while those in China and India have the worst air quality. 2) The relationships between urban population size and PM2.5 concentration differ among the large cities of different continents and countries. PM2.5 concentrations in large cities in North America, Europe, and Latin America show little fluctuation or a small increasing trend with population size, whereas those in Africa and India follow a U-shaped relationship and those in China an inverted U-shaped relationship. 3) The potential contribution of population to PM2.5 concentration is higher in the large cities of China and India, but lower in other large cities.

  3. Turing Automata and Graph Machines

    Directory of Open Access Journals (Sweden)

    Miklós Bartha

    2010-06-01

    Indexed monoidal algebras are introduced as an equivalent structure for self-dual compact closed categories, and a coherence theorem is proved for the category of such algebras. Turing automata and Turing graph machines are defined by generalizing the classical Turing machine concept, so that the collection of such machines becomes an indexed monoidal algebra. By analogy with the von Neumann data-flow computer architecture, Turing graph machines are proposed as potentially reversible low-level universal computational devices, and a truly reversible molecular-size hardware model is presented as an example.

  4. Determining organic carbon distributions in soil particle size fractions as a precondition of lateral carbon transport modeling at large scales

    Science.gov (United States)

    Schindewolf, Marcus; Seher, Wiebke; Pfeffer, Eduard; Schultze, Nico; Amorim, Ricardo S. S.; Schmidt, Jürgen

    2016-04-01

    The erosional transport of organic carbon affects the global carbon budget; however, it is uncertain whether erosion is a sink or a source of atmospheric carbon. Continuous erosion leads to a massive loss of topsoil, including the organic carbon historically accumulated in the soil humus fraction. The colluvial organic carbon can be protected from further degradation, depending on the depth of the colluvial cover and local decomposition conditions. Another part of the eroded soil and organic carbon enters surface water bodies and might be transported over long distances. The selective nature of soil erosion results in a preferential transport of fine particles, while less carbon-rich larger particles remain on site. Consequently, organic carbon is enriched in the eroded sediment compared to the source soil. As a precondition of process-based lateral carbon flux modeling, the carbon distribution over soil particle size fractions has to be known. In this regard, the present study determines the organic carbon contents of soil particle size separates by a combined sieve-sedimentation method for different tropical and temperate soils. Our results suggest a strong influence of parent material and climatic conditions on the carbon distribution over soil particle separates. Applying these results in erosion modeling, a test slope was simulated with the EROSION 2D simulation software for various land use and soil management scenarios and different rainfall events. These simulations allow first insights into carbon loss and depletion in sediment delivery areas as well as carbon gains and enrichment in deposition areas at the landscape scale, and could be used as a step forward in landscape-scale carbon redistribution modeling.

  5. Sparse extreme learning machine for classification.

    Science.gov (United States)

    Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M Brandon

    2014-10-01

    Extreme learning machine (ELM) was initially proposed for single-hidden-layer feedforward neural networks (SLFNs). In the hidden layer (feature mapping), nodes are randomly generated independently of training data. Furthermore, a unified ELM was proposed, providing a single framework to simplify and unify different learning methods, such as SLFNs, least square support vector machines, proximal support vector machines, and so on. However, the solution of unified ELM is dense, and thus, usually plenty of storage space and testing time are required for large-scale applications. In this paper, a sparse ELM is proposed as an alternative solution for classification, reducing storage space and testing time. In addition, unified ELM obtains the solution by matrix inversion, whose computational complexity is between quadratic and cubic with respect to the training size. It still requires plenty of training time for large-scale problems, even though it is much faster than many other traditional methods. In this paper, an efficient training algorithm is specifically developed for sparse ELM. The quadratic programming problem involved in sparse ELM is divided into a series of smallest possible sub-problems, each of which are solved analytically. Compared with SVM, sparse ELM obtains better generalization performance with much faster training speed. Compared with unified ELM, sparse ELM achieves similar generalization performance for binary classification applications, and when dealing with large-scale binary classification problems, sparse ELM realizes even faster training speed than unified ELM.
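
    The basic ELM construction referred to above (a random, untrained hidden layer followed by a closed-form least-squares solve for the output weights) can be sketched in a few lines. This is the dense, unified-ELM style of solution, not the sparse ELM proposed in the paper, and all sizes and data below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (placeholders for a real training set).
n_samples, n_features, n_hidden = 500, 10, 100
X = rng.normal(size=(n_samples, n_features))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n_samples))

# Hidden layer: random input weights and biases, fixed and never trained.
W = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                      # hidden-layer feature mapping

# Output weights from a regularized least-squares solve; this is the
# "matrix inversion" step whose cost motivates the sparse ELM of the paper.
C = 1.0                                     # regularization parameter
beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)

pred = np.sign(H @ beta)
print("training accuracy:", (pred == y).mean())
```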

  6. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, a Linux analysis software runs on a Macbook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  7. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    Science.gov (United States)

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.

  8. Interface localization-delocalization transition in a symmetric polymer blend: A finite-size scaling Monte Carlo study

    Science.gov (United States)

    Müller, M.; Binder, K.

    2001-02-01

    Using extensive Monte Carlo simulations, we study the phase diagram of a symmetric binary (AB) polymer blend confined into a thin film as a function of the film thickness D. The monomer-wall interactions are short ranged and antisymmetric, i.e., the left wall attracts the A component of the mixture with the same strength as the right wall does the B component, and this gives rise to a first order wetting transition in a semi-infinite geometry. The phase diagram and the crossover between different critical behaviors is explored. For large film thicknesses we find a first order interface localization-delocalization transition, and the phase diagram comprises two critical points, which are the finite film width analogies of the prewetting critical point. Using finite-size scaling techniques we locate these critical points, and present evidence of a two-dimensional Ising critical behavior. When we reduce the film width the two critical points approach the symmetry axis φ=1/2 of the phase diagram, and for D~2Rg we encounter a tricritical point. For an even smaller film thickness the interface localization-delocalization transition is second order, and we find a single critical point at φ=1/2. Measuring the probability distribution of the interface position, we determine the effective interaction between the wall and the interface. This effective interface potential depends on the lateral system size even away from the critical points. Its system size dependence stems from the large but finite correlation length of capillary waves. This finding gives direct evidence of a renormalization of the interface potential by capillary waves in the framework of a microscopic model.

  9. Interface localization-delocalization transition in a symmetric polymer blend: a finite-size scaling Monte Carlo study.

    Science.gov (United States)

    Müller, M; Binder, K

    2001-02-01

    Using extensive Monte Carlo simulations, we study the phase diagram of a symmetric binary (AB) polymer blend confined into a thin film as a function of the film thickness D. The monomer-wall interactions are short ranged and antisymmetric, i.e., the left wall attracts the A component of the mixture with the same strength as the right wall does the B component, and this gives rise to a first order wetting transition in a semi-infinite geometry. The phase diagram and the crossover between different critical behaviors is explored. For large film thicknesses we find a first order interface localization-delocalization transition, and the phase diagram comprises two critical points, which are the finite film width analogies of the prewetting critical point. Using finite-size scaling techniques we locate these critical points, and present evidence of a two-dimensional Ising critical behavior. When we reduce the film width the two critical points approach the symmetry axis φ=1/2 of the phase diagram, and for D ≈ 2R(g) we encounter a tricritical point. For an even smaller film thickness the interface localization-delocalization transition is second order, and we find a single critical point at φ=1/2. Measuring the probability distribution of the interface position, we determine the effective interaction between the wall and the interface. This effective interface potential depends on the lateral system size even away from the critical points. Its system size dependence stems from the large but finite correlation length of capillary waves. This finding gives direct evidence of a renormalization of the interface potential by capillary waves in the framework of a microscopic model.

  10. Disk galaxy scaling relations at intermediate redshifts - I. The Tully-Fisher and velocity-size relations

    CERN Document Server

    Boehm, Asmus

    2015-01-01

    Galaxy scaling relations such as the Tully-Fisher relation (between maximum rotation velocity Vmax and luminosity) and the velocity-size relation (between Vmax and disk scale length) are powerful tools to quantify the evolution of disk galaxies with cosmic time. We took spatially resolved slit spectra of 261 field disk galaxies at redshifts up to z~1 using the FORS instruments of the ESO Very Large Telescope. The targets were selected from the FORS Deep Field and William Herschel Deep Field. Our spectroscopy was complemented with HST/ACS imaging in the F814W filter. We analyzed the ionized gas kinematics by extracting rotation curves from the 2-D spectra. Taking into account all geometrical, observational and instrumental effects, these rotation curves were used to derive the intrinsic Vmax. Neglecting galaxies with disturbed kinematics or insufficient spatial rotation curve extent, Vmax could be determined for 137 galaxies covering redshifts 0.05

  11. Scalable Electron Correlation Methods. 2. Parallel PNO-LMP2-F12 with Near Linear Scaling in the Molecular Size.

    Science.gov (United States)

    Ma, Qianli; Werner, Hans-Joachim

    2015-11-10

    We present an efficient explicitly correlated pair natural orbital local second-order Møller-Plesset perturbation theory (PNO-LMP2-F12) method. The method is an extension of our previously reported PNO-LMP2 approach [Werner et al., J. Chem. Theory Comput. 2015, 11, 484]. Near-linear scaling with the size of the molecule is achieved by using domain approximations on both virtual and occupied orbitals, local density fitting (DF), and local resolution of the identity (RI), and by exploiting the sparsity of the local molecular orbitals (LMOs) as well as of projected atomic orbitals (PAOs). All large data structures used in the method are stored in distributed memory using Global Arrays (GAs) to achieve near inverse-linear scaling with the number of processing cores, provided that the GAs can be efficiently and independently accessed from all cores. The effect of the various domain approximations is tested for a wide range of chemical reactions. The PNO-LMP2-F12 reaction energies deviate from the canonical DF-MP2-F12 results by ≤1 kJ mol^-1 using triple-ζ (VTZ-F12) basis sets and are close to the complete basis set limits. PNO-LMP2-F12 calculations on molecules of chemical interest involving a few thousand basis functions can be performed within an hour or less using a few nodes on a small computer cluster.

  12. Size Optimization of XK719 CNC Milling Machine Based on Sensitivity Analysis

    Institute of Scientific and Technical Information of China (English)

    张疆平; 李想; 贾成阁; 赵希禄; 关英俊

    2016-01-01

    In order to improve the static and dynamic characteristics of machine tools, Hypermesh software was adopted to establish a finite element model of the CNC milling machine, with the XK719 CNC milling machine as the research object. On the basis of a harmonic response analysis of the milling machine, it was concluded that the first and second natural frequencies were low. Sensitivity analysis was used to identify the key dimensions, and size optimization was then carried out with the key dimensions defined as design variables and the mass of the machine tool, the first natural frequency and the second natural frequency defined as responses. Finally, with little change in the milling machine's mass, this method increased the first natural frequency by 11.5% and the second natural frequency by 11.3%. The results show that the design method provides a reference for the design of machine tool parts and components.

  13. Machine Learning

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  14. Influence of aggregate sizes and microstructures on bioremediation assessment of field-contaminated soils in pilot-scale biopiles

    Science.gov (United States)

    Chang, W.; Akbari, A.; Frigon, D.; Ghoshal, S.

    2011-12-01

    Petroleum hydrocarbon contamination of soils and groundwater is an environmental concern. Bioremediation has been frequently considered a cost-effective, less disruptive remedial technology. Formation of soil aggregate fractions in unsaturated soils is generally believed to hinder aerobic hydrocarbon biodegradation due to the slow intra-pore diffusion of nutrients and oxygen within the aggregate matrix and to the reduced bioavailability of hydrocarbons. On the other hand, soil aggregates may harbour favourable niches for indigenous bacteria, providing protective microsites against various in situ environmental stresses. The size of the soil aggregates is likely to be a critical factor for these processes and could be interpreted as a relevant marker for biodegradation assessment. There have been only limited attempts in the past to assess petroleum hydrocarbon biodegradation in unsaturated soils as a function of aggregate size. This study is aimed at investigating the roles of aggregate sizes and aggregate microstructures on biodegradation activity. Field-aged, contaminated, clayey soils were shipped from Norman Wells, Canada. Attempts were made to stimulate indigenous microbial activity by soil aeration and nutrient amendments in a pilot-scale biopile tank (1 m L × 0.65 m W × 0.3 m H). A control biopile was maintained without the nutrient amendment but was aerated. The initial concentrations of petroleum hydrocarbons in the field-contaminated soils increased with increasing aggregate sizes, which were classified into three fractions: micro- (<250 μm), meso- (250-2000 μm) and macro-aggregates (>2000 μm). Compared to the TPH analyses at the whole-soil level, the petroleum hydrocarbon analyses based on the aggregate-size levels demonstrated more clearly the extent of biodegradation of non-volatile, heavier hydrocarbons (C16-C34) in the soil. The removal of the C16-C34 hydrocarbons was 44% in macro-aggregates, but only 13% in meso-aggregates. The increased protein concentrations in macro

  15. Development of new shaped punch to predict scale-up issue in tableting process.

    Science.gov (United States)

    Aoki, Shigeru; Uchiyama, Jumpei; Ito, Manabu

    2014-01-01

    Scale-up issues in the tableting process, such as capping, sticking, or differences in tablet thickness, are often observed at the commercial production scale. A new shaped punch, named the size adjusted for scale-up (SAS) punch, was created to estimate scale-up issues seen between laboratory scale and commercial scale tableting processes. The SAS punch's head shape was designed to replicate the total compression time of a laboratory tableting machine to that of a commercial tableting machine. Three different lubricated blends were compressed into tablets using a laboratory tableting machine equipped with SAS punches, and any differences in tablet thickness or capping phenomenon were observed. It was found that the new shaped punch could be used to replicate scale-up issues observed in the commercial tableting machine. The SAS punch was shown to be a useful tool to estimate scale-up issues in the tableting process. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.

  16. Electromechanical properties of 1D ZnO nanostructures: nanopiezotronics building blocks, surface and size-scale effects.

    Science.gov (United States)

    Momeni, Kasra; Attariani, Hamed

    2014-03-14

    One-dimensional (1D) zinc oxide nanostructures are the main components of nanogenerators and central to the emerging field of nanopiezotronics. Understanding the underlying physics and quantifying the electromechanical properties of these structures, the topic of this research study, play a major role in designing next-generation nanoelectromechanical devices. Here, atomistic simulations are utilized to study surface and size-scale effects on the electromechanical response of 1D ZnO nanostructures. It is shown that the mechanical and piezoelectric properties of these structures are controlled by their size, cross-sectional geometry, and loading configuration. The study reveals enhancement of the piezoelectric and elastic modulus of ZnO nanowires (NW) with diameter d > 1 nm, followed by a sudden drop for d < 1 nm due to transformation of NWs to nanotubes (NTs). Degradation of mechanical and piezoelectric properties of ZnO nanobelts (NBs) followed by an enhancement in piezoelectric properties occurs when their lower dimension is reduced to <1 nm. The latter enhancement can be explained in the context of surface reconfiguration and formation of hexagon-tetragon (HT) pairs at the intersection of the (21̄1̄0) and (011̄0) planes in NBs. Transition from a surface-reconstructed dominant to a surface-relaxed dominant region is demonstrated for lateral dimensions <1 nm. New phase-transformation (PT) kinetics from piezoelectric wurtzite to nonpiezoelectric body-centered tetragonal (WZ → BCT) and graphite-like phase (WZ → HX) structures occurs in ZnO NWs loaded up to large strains of ∼10%.

  17. Sizing Up the Milky Way: A Bayesian Mixture Model Meta-analysis of Photometric Scale Length Measurements

    Science.gov (United States)

    Licquia, Timothy C.; Newman, Jeffrey A.

    2016-11-01

    The exponential scale length (L_d) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and helping us to understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and are often statistically incompatible with one another. Here, we perform a Bayesian meta-analysis to determine an improved, aggregate estimate for L_d, utilizing a mixture-model approach to account for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery, we explore a variety of ways of modeling the nature of problematic measurements, and then employ a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of L_d available in the literature; these involve a broad assortment of observational data sets, MW models and assumptions, and methodologies, all tabulated herein. Analyzing the visible and infrared measurements separately yields estimates for L_d of 2.71^{+0.22}_{-0.20} kpc and 2.51^{+0.15}_{-0.13} kpc, respectively, whereas considering them all combined yields 2.64 ± 0.13 kpc. The ratio between the visible and infrared scale lengths determined here is very similar to that measured in external spiral galaxies. We use these results to update the model of the Galactic disk from our previous work, constraining its stellar mass to be 4.8^{+1.5}_{-1.1} × 10^10 M_⊙, and the MW's total stellar mass to be 5.7^{+1.5}_{-1.1} × 10^10 M_⊙.
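
    The mixture-model idea described above, in which each reported measurement is either "good" (errors as quoted) or "problematic" (errors effectively inflated), can be sketched with a simple grid posterior. The measurement values, quoted errors, inflation factor, and prior probability of a good measurement below are all invented and do not reproduce the paper's data or its model-averaging machinery.

```python
import numpy as np

# Hypothetical scale-length measurements (kpc) and their quoted 1-sigma errors.
L_meas = np.array([2.3, 2.6, 3.2, 2.5, 2.9])
sigma = np.array([0.2, 0.3, 0.4, 0.2, 0.5])

p_good = 0.7      # assumed prior probability that a measurement is "good"
inflate = 3.0     # assumed error inflation for "problematic" measurements

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Grid over the true scale length with a flat prior; marginalize the
# good/problematic flag of every measurement analytically.
L_grid = np.linspace(1.5, 4.0, 1001)
log_post = np.zeros_like(L_grid)
for Li, si in zip(L_meas, sigma):
    like = p_good * gauss(Li, L_grid, si) + (1.0 - p_good) * gauss(Li, L_grid, inflate * si)
    log_post += np.log(like)

dL = L_grid[1] - L_grid[0]
post = np.exp(log_post - log_post.max())
post /= post.sum() * dL
mean = (L_grid * post).sum() * dL
print(f"posterior mean scale length: {mean:.2f} kpc")
```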

  18. Secondary craters from large impacts on Europa and Ganymede: Ejecta size-velocity distributions on icy worlds, and the scaling of ejected blocks

    Science.gov (United States)

    Singer, Kelsi N.; McKinnon, William B.; Nowicki, L. T.

    2013-09-01

    We have mapped fields of secondary craters around three large primary craters on Europa and Ganymede and estimated the size and velocity of the fragments that formed the secondaries using updated scaling equations for ice impacts. We characterize the upper envelope of the fragment size-velocity distribution to obtain a function for the largest fragments at a given ejection velocity. Power-law velocity exponents found in our study of icy satellite secondary fields are compared to the exponents found for similar studies of mercurian, lunar, and martian craters; for all but basin-scale impacts, fragment size decreases more slowly with increasing ejection velocity than on rocky bodies. Spallation theory provides estimates of the size of ejected spall plates at a given velocity, but this theory predicts fragments considerably smaller than are necessary to form most of our observed secondaries. In general, ejecta fragment sizes scale with primary crater diameter and decrease with increasing ejection velocity, υej, by 1/υej or greater, and point-source scaling implies a relation between the two. The largest crater represented in any of these studies, Gilgamesh on Ganymede, exhibits a relatively steep velocity dependence. Extrapolating the results to the escape speed for each icy moon yields the size of the largest fragment that could later re-impact to form a so-called sesquinary crater, either on the parent moon or a neighboring satellite. We find that craters above 2 km in diameter on Europa and Ganymede are unlikely to be sesquinaries.

  19. Pore-Scale Investigation of Micron-Size Polyacrylamide Elastic Microspheres (MPEMs) Transport and Retention in Saturated Porous Media

    KAUST Repository

    Yao, Chuanjin

    2014-05-06

    Knowledge of micrometer-size polyacrylamide elastic microsphere (MPEM) transport and retention mechanisms in porous media is essential for the application of MPEMs as a smart sweep improvement and profile modification agent in improving oil recovery. A transparent micromodel packed with translucent quartz sand was constructed and used to investigate the pore-scale transport, surface deposition-release, and plugging deposition-remigration mechanisms of MPEMs in porous media. The results indicate that the combination of colloidal and hydrodynamic forces controls the deposition and release of MPEMs on pore-surfaces; the reduction of fluid salinity and the increase of Darcy velocity are beneficial to the MPEM release from pore-surfaces; the hydrodynamic forces also influence the remigration of MPEMs in pore-throats. MPEMs can plug pore-throats through the mechanisms of capture-plugging, superposition-plugging, and bridge-plugging, which produces resistance to water flow; the interception with MPEM particulate filters occurring in the interior of porous media can enhance the plugging effect of MPEMs; while the interception with MPEM particulate filters occurring at the surface of low-permeability layer can prevent the low-permeability layer from being damaged by MPEMs. MPEMs can remigrate in pore-throats depending on their elasticity through four steps of capture-plugging, elastic deformation, steady migration, and deformation recovery. © 2014 American Chemical Society.

  20. The relationship between self-reported vividness and latency during mental size scaling of everyday items: phenomenological evidence of different types of imagery.

    Science.gov (United States)

    D'Angiulli, Amedeo; Reeves, Adam

    2007-01-01

    We examined how the relationship between ratings of vividness (or image strength) and image latency might reflect the concerted action of two visual imagery pathways hypothesized by Kosslyn (1994): the ventral pathway, processing object properties, and the dorsal pathway, processing locative properties of mental images. Participants formed their images at small or large angular display sizes, varying the amount of size scaling needed. In Experiment 1, display size varied between participants, and images were trial unique. The higher the vividness, the faster the generation of small images (requiring size scaling of less than 10 degrees), which would recruit mainly the ventral pathway. This vivid-is-fast relationship changed for large images (requiring size scaling of 10 degrees or more), which would recruit mainly the dorsal pathway. The size-dependent alteration of the vivid-is-fast relationship was replicated in the first block of Experiment 2. However, when repeated over 3 consecutive blocks, image generation sped up, and gradually the vivid-is-fast relationship tended to occur for all display sizes until complete automatization of image generation occurred. The findings suggest that differential patterns of vividness-latency relationship can reflect the types of images involved, their relative ventral and dorsal contributions, and the involvement of working memory.

  1. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as examples of widely used data-driven classification and modeling strategies.

  2. Laser machining of explosives

    Science.gov (United States)

    Perry, Michael D.; Stuart, Brent C.; Banks, Paul S.; Myers, Booth R.; Sefcik, Joseph A.

    2000-01-01

    The invention consists of a method for machining (cutting, drilling, sculpting) of explosives (e.g., TNT, TATB, PETN, RDX, etc.). By using pulses of a duration in the range of 5 femtoseconds to 50 picoseconds, extremely precise and rapid machining can be achieved with essentially no heat or shock affected zone. In this method, material is removed by a nonthermal mechanism. A combination of multiphoton and collisional ionization creates a critical density plasma on a time scale much shorter than that on which electron kinetic energy is transferred to the lattice. The resulting plasma is far from thermal equilibrium. The material is in essence converted from its initial solid state directly into a fully ionized plasma on a time scale too short for thermal equilibrium to be established with the lattice. As a result, there is negligible heat conduction beyond the region removed, resulting in negligible thermal stress or shock to the material beyond a few microns from the laser machined surface. Hydrodynamic expansion of the plasma eliminates the need for any ancillary techniques to remove material and produces extremely high quality machined surfaces. There is no detonation or deflagration of the explosive in the process and the material which is removed is rendered inert.

  3. Power Scaling of the Size Distribution of Economic Loss and Fatalities due to Hurricanes, Earthquakes, Tornadoes, and Floods in the USA

    Science.gov (United States)

    Tebbens, S. F.; Barton, C. C.; Scott, B. E.

    2016-12-01

    Traditionally, the size of natural disaster events such as hurricanes, earthquakes, tornadoes, and floods is measured in terms of wind speed (m/sec), energy released (ergs), or discharge (m3/sec) rather than by economic loss or fatalities. Economic loss and fatalities from natural disasters result from the intersection of the human infrastructure and population with the size of the natural event. This study investigates the size versus cumulative number distribution of individual natural disaster events for several disaster types in the United States. Economic losses are adjusted for inflation to 2014 USD. The cumulative number divided by the time over which the data ranges for each disaster type is the basis for making probabilistic forecasts in terms of the number of events greater than a given size per year and, its inverse, return time. Such forecasts are of interest to insurers/re-insurers, meteorologists, seismologists, government planners, and response agencies. Plots of size versus cumulative number distributions per year for economic loss and fatalities are well fit by power scaling functions of the form p(x) = Cx^−β, where p(x) is the cumulative number of events with size equal to or greater than x, C is a constant (the activity level), x is the event size, and β is the scaling exponent. Economic loss and fatalities due to hurricanes, earthquakes, tornadoes, and floods are well fit by power functions over one to five orders of magnitude in size. Economic losses for hurricanes and tornadoes have greater scaling exponents, β = 1.1 and 0.9 respectively, whereas earthquakes and floods have smaller scaling exponents, β = 0.4 and 0.6 respectively. Fatalities for tornadoes and floods have greater scaling exponents, β = 1.5 and 1.7 respectively, whereas hurricanes and earthquakes have smaller scaling exponents, β = 0.4 and 0.7 respectively. The scaling exponents can be used to make probabilistic forecasts for time windows ranging from 1 to 1000 years.
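
    A minimal sketch of how such a cumulative power law can be fitted and turned into a yearly rate is given below; the loss values, record length T, and threshold x0 are invented for illustration and are not from the study.

      import numpy as np

      losses = np.array([0.5, 1.2, 2.0, 3.5, 6.0, 11.0, 25.0, 60.0])  # hypothetical event sizes
      x = np.sort(losses)[::-1]
      n_cum = np.arange(1, x.size + 1)        # number of events >= each size x

      # Fit log10 N(>x) = log10 C - beta * log10 x by least squares.
      slope, intercept = np.polyfit(np.log10(x), np.log10(n_cum), 1)
      beta, C = -slope, 10.0 ** intercept
      print(f"beta ~ {beta:.2f}, C ~ {C:.2f}")

      T = 50.0                                # assumed length of record, years
      x0 = 10.0                               # assumed event size of interest
      rate = (C / T) * x0 ** (-beta)          # expected events per year larger than x0
      print(f"events/yr > {x0}: {rate:.3f}; return time ~ {1.0 / rate:.0f} yr")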

  4. Impedance Scaling and Impedance Control

    Energy Technology Data Exchange (ETDEWEB)

    Chou, W.; Griffin, J.

    1997-06-01

    When a machine becomes really large, such as the Very Large Hadron Collider (VLHC), of which the circumference could reach the order of megameters, beam instability could be an essential bottleneck. This paper studies the scaling of the instability threshold vs. machine size when the coupling impedance scales in a "normal" way. It is shown that the beam would be intrinsically unstable for the VLHC. As a possible solution to this problem, it is proposed to introduce local impedance inserts for controlling the machine impedance. In the longitudinal plane, this could be done by using a heavily detuned rf cavity (e.g., a biconical structure), which could provide large imaginary impedance with the right sign (i.e., inductive or capacitive) while keeping the real part small. In the transverse direction, a carefully designed variation of the cross section of a beam pipe could generate negative impedance that would partially compensate the transverse impedance in one plane.

  5. Machine testing

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with a laboratory exercise of 3 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercise includes a series of tests carried out by the student on a conventional and a numerically controlled lathe, respectively. This document...

  6. Representational Machines

    DEFF Research Database (Denmark)

    Petersson, Dag; Dahlgren, Anna; Vestberg, Nina Lager

    to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological...

  7. Experimental and theoretical studies of scaling of sizes and intrinsic viscosity of hyperbranched chains in good solvents

    Science.gov (United States)

    Li, Lianwei; Lu, Yuyuan; An, Lijia; Wu, Chi

    2013-03-01

    Using a set of hyperbranched polystyrenes with different overall molar masses but a uniform subchain length or a similar overall molar mass but different subchain lengths, we studied their sizes and hydrodynamic behaviors in toluene (a good solvent) at T = 25 °C by combining experimental (laser light scattering (LLS) and viscometry) and theoretical methods based on a partially permeable sphere model. Our results show that both the average radius of gyration (⟨Rg⟩) and hydrodynamic radius (⟨Rh⟩) scale with the weight-average molar mass (Mw) as ⟨Rg⟩ ∼ ⟨Rh⟩ ∼ Mw^γ Mw,s^φ, with γ = 0.47 ± 0.01 and φ = 0.10 ± 0.01; and their intrinsic viscosity ([η]) quantitatively follows the Mark-Houwink-Sakurada (MHS) equation [η] = Kη Mw^ν Mw,s^μ with Kη = 2.26 × 10^−5, ν = 0.39 ± 0.01, and μ = 0.31 ± 0.01, revealing that these model chains with long subchains are indeed fractal objects. Further, our theoretical and experimental results broadly agree with each other besides a slight deviation from the MHS equation for short subchains, similar to dendrimers, presumably due to the multi-body hydrodynamic interaction. Moreover, we also find that the average viscometric radius (⟨Rη⟩) determined from intrinsic viscosity is slightly smaller than ⟨Rh⟩ measured in dynamic LLS and their ratio (⟨Rη⟩/⟨Rh⟩) roughly remains 0.95 ± 0.05, reflecting that linear polymer chains are more draining with a smaller ⟨Rh⟩ than their hyperbranched counterparts for a given intrinsic viscosity. Our current study of the "defect-free" hyperbranched polymer chains offers a standard model for further theoretical investigation of hydrodynamic behaviors of hyperbranched polymers and other complicated architectures, in a remaining unexploited research field of polymer science.

  8. The importance of leading edge vortices under simplified flapping flight conditions at the size scale of birds.

    Science.gov (United States)

    Hubel, Tatjana Y; Tropea, Cameron

    2010-06-01

    Over the last decade, interest in animal flight has grown, in part due to the possible use of flapping propulsion for micro air vehicles. The importance of unsteady lift-enhancing mechanisms in insect flight has been recognized, but unsteady effects were generally thought to be absent for the flapping flight of larger animals. Only recently has the existence of LEVs (leading edge vortices) in small vertebrates such as swifts, small bats and hummingbirds been confirmed. To study the relevance of unsteady effects at the scale of large birds [reduced frequency k between 0.05 and 0.3, k = (πfc)/U∞; f is wingbeat frequency, U∞ is free-stream velocity, and c is the average wing chord], and the consequences of the lack of kinematic and morphological refinements, we have designed a simplified goose-sized flapping model for wind tunnel testing. The 2-D flow patterns along the wing span were quantitatively visualized using particle image velocimetry (PIV), and a three-component balance was used to measure the forces generated by the wings. The flow visualization on the wing showed the appearance of LEVs, which is typically associated with a delayed stall effect, and the transition into flow separation. Also, the influence of the delayed stall and flow separation was clearly visible in measurements of instantaneous net force over the wingbeat cycle. Here, we show that, even at reduced frequencies as low as those of large bird flight, unsteady effects are present and non-negligible and have to be addressed by kinematic and morphological adaptations.
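
    For orientation, the reduced frequency quoted above can be checked with a one-line calculation; the wingbeat frequency, chord, and flight speed below are illustrative goose-like values, not data from the study.

      import math

      f = 3.0       # wingbeat frequency, Hz (assumed)
      c = 0.20      # mean wing chord, m (assumed)
      U_inf = 15.0  # free-stream velocity, m/s (assumed)

      k = math.pi * f * c / U_inf
      print(f"k = {k:.3f}")   # ~0.13, inside the 0.05-0.3 range quoted above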

  9. The coupled effects of environmental composition, temperature and contact size-scale on the tribology of molybdenum disulfide

    Science.gov (United States)

    Khare, Harmandeep S.

    combination of both surface adsorption and diffusion into the coating subsurface. Thermally activated desiccation effectively dries the bulk of the coating, yielding low values of friction coefficient even at ambient humidity and temperature. Friction of MoS2 decreases with increasing temperature between 25°C and 100°C in the presence of environmental water and increases in the presence of oxygen alone. At temperatures greater than 100°C, friction generally increases with temperature only in the presence of environmental oxygen; at these elevated temperatures, friction decreases with increasing humidity. The transition from room-temperature increase to elevated-temperature decrease in friction with increasing humidity is found to be a strong function of the contact history as well as coating microstructure. Lastly, the contribution of nanoscale tribofilms to macroscale friction was studied through nanotribometry. Friction measured on the worn MoS2 coating with a nano-scale AFM probe showed direct and quantifiable evidence of sliding-induced surface modification of MoS2; friction measured on the perfectly ordered single crystal MoS2 was nearly an order of magnitude lower than friction on worn MoS2. Although friction coefficients measured with a nanoscale probe showed high surface sensitivity, micron-sized AFM probes gave friction coefficients similar to those obtained in the macroscale, suggesting the formation of surface films in-situ during sliding with the colloidal probe. A reduction in friction is observed after annealing for both the nanoscale and microscale probes, suggesting a strong overriding effect of the desiccated bulk over surface adsorption in driving the friction response at these length-scales.

  10. Ising universality class for the liquid-liquid critical point of a one component fluid: a finite-size scaling test.

    Science.gov (United States)

    Gallo, Paola; Sciortino, Francesco

    2012-10-26

    We present a finite-size scaling study of the liquid-liquid critical point in the Jagla model, a prototype model for liquids that present the same thermodynamic anomalies which characterize liquid water. Performing successive umbrella sampling grand canonical Monte Carlo simulations, we evaluate an accurate density of states for different system sizes and determine the size-dependent critical parameters. Extrapolation to infinite size provides estimates of the bulk critical values for this model. The finite-size study allows us to establish that critical fluctuations are consistent with the Ising universality class and to provide definitive evidence for the existence of a liquid-liquid critical point in the Jagla potential. This finding supports the possibility of the existence of a genuine liquid-liquid critical point in anomalous one-component liquids like water.

  11. Adding machine and calculating machine

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In 1642 the French mathematician Blaise Pascal (1623-1662) invented a machine that could add and subtract. It had wheels that each had 1 to 10 marked off along its circumference. When the wheel at the right, representing units, made one complete circle, it engaged the wheel to its left, representing tens, and moved it forward one notch.

  12. Noise Analysis and Recognition of Noise Sources for Mid-sized Cold Milling Machine

    Institute of Scientific and Technical Information of China (English)

    元小强; 黄志亮; 单欢乐

    2014-01-01

    Operator comfort has gradually become an important index for evaluating construction machinery, and noise is a key component of that comfort. Based on noise tests on a mid-sized cold milling machine, this paper analyzes the distribution of the noise sources, their spectral characteristics, and the noise transfer path. The test results indicate that the noise at the operator's seat is contributed mainly by the cooling fan and engine exhaust, that its energy is concentrated in the low and medium frequency bands below a center frequency of 630 Hz, and that it is transmitted mainly from the cooling air outlet under the floor.

  13. Finite-size scaling as a tool for the search of the critical endpoint of QCD in heavy-ion data

    Science.gov (United States)

    Palhares, L. F.; Fraga, E. S.

    2012-07-01

    We briefly discuss the role played by the finiteness of the system created in high-energy heavy-ion collisions (HICs) in the experimental search of the QCD critical endpoint and, in particular, the applicability of the predictive power of finite-size scaling plots in data analysis of current HICs.

  14. Finite-size scaling as a tool for the search of the critical endpoint of QCD in heavy-ion data

    Energy Technology Data Exchange (ETDEWEB)

    Palhares, L. F., E-mail: leticia@if.ufrj.br [CEA Saclay, Institut de Physique Theorique (France); Fraga, E. S., E-mail: fraga@if.ufrj.br [Universidade Federal do Rio de Janeiro, Instituto de Fisica (Brazil)

    2012-07-15

    We briefly discuss the role played by the finiteness of the system created in high-energy heavy-ion collisions (HICs) in the experimental search of the QCD critical endpoint and, in particular, the applicability of the predictive power of finite-size scaling plots in data analysis of current HICs.

  15. Estimation of rain kinetic energy from radar reflectivity and/or rain rate based on a scaling formulation of the raindrop size distribution

    NARCIS (Netherlands)

    Yu, N.; Boudevillain, B.; Delrieu, G.; Uijlenhoet, R.

    2012-01-01

    This study offers an approach to estimate the rainfall kinetic energy (KE) by rain intensity (R) and radar reflectivity factor (Z) separately or jointly on the basis of a one- or two-moment scaled raindrop size distribution (DSD) formulation, which contains (1) R and/or Z observations and (2) the

  16. Effects of pore-scale dispersion, degree of heterogeneity, sampling size, and source volume on the concentration moments of conservative solutes in heterogeneous formations

    Science.gov (United States)

    Daniele Tonina; Alberto Bellin

    2008-01-01

    Pore-scale dispersion (PSD), aquifer heterogeneity, sampling volume, and source size influence solute concentrations of conservative tracers transported in heterogeneous porous formations. In this work, we developed a new set of analytical solutions for the concentration ensemble mean, variance, and coefficient of variation (CV), which consider the effects of all these...

  17. Genesis machines

    CERN Document Server

    Amos, Martyn

    2014-01-01

    Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities. Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers - but about life itself.

  18. Non-linear scaling of oxygen consumption and heart rate in a very large cockroach species (Gromphadorhina portentosa): correlated changes with body size and temperature.

    Science.gov (United States)

    Streicher, Jeffrey W; Cox, Christian L; Birchard, Geoffrey F

    2012-04-01

    Although well documented in vertebrates, correlated changes between metabolic rate and cardiovascular function of insects have rarely been described. Using the very large cockroach species Gromphadorhina portentosa, we examined oxygen consumption and heart rate across a range of body sizes and temperatures. Metabolic rate scaled positively and heart rate negatively with body size, but neither scaled linearly. The response of these two variables to temperature was similar. This correlated response to endogenous (body mass) and exogenous (temperature) variables is likely explained by a mutual dependence on similar metabolic substrate use and/or coupled regulatory pathways. The intraspecific scaling for oxygen consumption rate showed an apparent plateauing at body masses greater than about 3 g. An examination of cuticle mass across all instars revealed isometric scaling with no evidence of an ontogenetic shift towards proportionally larger cuticles. Published oxygen consumption rates of other Blattodea species were also examined and, as in our intraspecific examination of G. portentosa, the scaling relationship was found to be non-linear with a decreasing slope at larger body masses. The decreasing slope at very large body masses in both intraspecific and interspecific comparisons may have important implications for future investigations of the relationship between oxygen transport and maximum body size in insects.

  19. Making extreme computations possible with virtual machines

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, J.; Chokoufe Nejad, B. [DESY, Hamburg (Germany). Theory Group; Ohl, T. [Wuerzburg Univ. (Germany)

    2016-02-15

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
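
    The byte-code idea can be illustrated with a toy stack-machine interpreter; the opcodes and program below are invented purely for the example and have nothing to do with the actual O'Mega/WHIZARD byte-code format.

      # Toy stack-based byte-code interpreter: each instruction is (opcode, argument).
      PUSH, ADD, MUL = 0, 1, 2

      def run(program, constants):
          stack = []
          for op, arg in program:
              if op == PUSH:
                  stack.append(constants[arg])     # load a constant onto the stack
              elif op == ADD:
                  b, a = stack.pop(), stack.pop()
                  stack.append(a + b)
              elif op == MUL:
                  b, a = stack.pop(), stack.pop()
                  stack.append(a * b)
          return stack.pop()

      # (2 + 3) * 4 encoded as byte-code:
      program = [(PUSH, 0), (PUSH, 1), (ADD, None), (PUSH, 2), (MUL, None)]
      print(run(program, constants=[2.0, 3.0, 4.0]))   # -> 20.0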

  20. Making extreme computations possible with virtual machines

    Science.gov (United States)

    Reuter, J.; Chokoufe Nejad, B.; Ohl, T.

    2016-10-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.

  1. Evidence for small scale variation in the vertebrate brain: mating strategy and sex affect brain size and structure in wild brown trout (Salmo trutta).

    Science.gov (United States)

    Kolm, N; Gonzalez-Voyer, A; Brelin, D; Winberg, S

    2009-12-01

    The basis for our knowledge of brain evolution in vertebrates rests heavily on empirical evidence from comparative studies at the species level. However, little is still known about the natural levels of variation and the evolutionary causes of differences in brain size and brain structure within-species, even though selection at this level is an important initial generator of macroevolutionary patterns across species. Here, we examine how early life-history decisions and sex are related to brain size and brain structure in wild populations using the existing natural variation in mating strategies among wild brown trout (Salmo trutta). By comparing the brains of precocious fish that remain in the river and sexually mature at a small size with those of migratory fish that migrate to the sea and sexually mature at a much larger size, we show, for the first time in any vertebrate, strong differences in relative brain size and brain structure across mating strategies. Precocious fish have larger brain size (when controlling for body size) but migratory fish have a larger cerebellum, the structure in charge of motor coordination. Moreover, we demonstrate sex-specific differences in brain structure as female precocious fish have a larger brain than male precocious fish while males of both strategies have a larger telencephalon, the cognitive control centre, than females. The differences in brain size and structure across mating strategies and sexes thus suggest the possibility for fine scale adaptive evolution of the vertebrate brain in relation to different life histories.

  2. A New Incremental Support Vector Machine Algorithm

    Directory of Open Access Journals (Sweden)

    Wenjuan Zhao

    2012-10-01

    Full Text Available The support vector machine is a popular method in machine learning. Incremental support vector machine algorithms are an ideal choice when faced with large learning data sets. In this paper a new incremental support vector machine learning algorithm is proposed to improve the efficiency of large-scale data processing. The model of this incremental learning algorithm is similar to the standard support vector machine. The goal concept is updated by incremental learning, and each training procedure only includes the new training data, so the time complexity is independent of the whole training set. Compared with other incremental versions, the training speed of this approach is improved and the change of the hyperplane is reduced.
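
    The abstract does not give the algorithm itself; as a stand-in, the sketch below shows incremental training of a linear, SVM-like classifier using scikit-learn's SGDClassifier with hinge loss and partial_fit, where each update sees only the newly arrived batch. The data and labels are synthetic and illustrative only.

      import numpy as np
      from sklearn.linear_model import SGDClassifier

      rng = np.random.default_rng(0)
      clf = SGDClassifier(loss="hinge")            # hinge loss ~ linear SVM objective
      classes = np.array([0, 1])

      for _ in range(10):                          # ten batches arriving over time
          X = rng.normal(size=(200, 5))
          y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labels
          clf.partial_fit(X, y, classes=classes)   # update using the new batch only

      X_new = rng.normal(size=(5, 5))
      print(clf.predict(X_new))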

  3. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    Directory of Open Access Journals (Sweden)

    Stålring Jonna C

    2011-07-01

    Full Text Available Abstract Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the

  4. Nanomedicine: tiny particles and machines give huge gains.

    Science.gov (United States)

    Tong, Sheng; Fine, Eli J; Lin, Yanni; Cradick, Thomas J; Bao, Gang

    2014-02-01

    Nanomedicine is an emerging field that integrates nanotechnology, biomolecular engineering, life sciences and medicine; it is expected to produce major breakthroughs in medical diagnostics and therapeutics. Nano-scale structures and devices are compatible in size with proteins and nucleic acids in living cells. Therefore, the design, characterization and application of nano-scale probes, carriers and machines may provide unprecedented opportunities for achieving a better control of biological processes, and drastic improvements in disease detection, therapy, and prevention. Recent advances in nanomedicine include the development of nanoparticle (NP)-based probes for molecular imaging, nano-carriers for drug/gene delivery, multifunctional NPs for theranostics, and molecular machines for biological and medical studies. This article provides an overview of the nanomedicine field, with an emphasis on NPs for imaging and therapy, as well as engineered nucleases for genome editing. The challenges in translating nanomedicine approaches to clinical applications are discussed.

  5. Nanometer-scale sizing accuracy of particle suspensions on an unmodified cell phone using elastic light scattering.

    Directory of Open Access Journals (Sweden)

    Zachary J Smith

    Full Text Available We report on the construction of a Fourier plane imaging system attached to a cell phone. By illuminating particle suspensions with a collimated beam from an inexpensive diode laser, angularly resolved scattering patterns are imaged by the phone's camera. Analyzing these patterns with Mie theory results in predictions of size distributions of the particles in suspension. Despite using consumer grade electronics, we extracted size distributions of sphere suspensions with better than 20 nm accuracy in determining the mean size. We also show results from milk, yeast, and blood cells. Performing these measurements on a portable device presents opportunities for field-testing of food quality, process monitoring, and medical diagnosis.

  6. Simulating Turing machines on Maurer machines

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2008-01-01

    In a previous paper, we used Maurer machines to model and analyse micro-architectures. In the current paper, we investigate the connections between Turing machines and Maurer machines with the purpose to gain an insight into computability issues relating to Maurer machines. We introduce ways to

  7. Application of Network Scale Up Method in the Estimation of Population Size for Men Who Have Sex with Men in Shanghai, China.

    Directory of Open Access Journals (Sweden)

    Jun Wang

    Full Text Available Men who have sex with men (MSM) are at high risk of HIV infection. For developing proper interventions, it is important to know the size of the MSM population. However, size estimation of MSM populations is still a significant public health challenge due to high cost, the hard-to-reach nature of the population, and the stigma associated with it. We aimed to estimate the social network size (c value) in the general population and the size of the MSM population in Shanghai, China, by using the network scale-up method. A multistage random sampling was used to recruit participants aged from 18 to 60 years who had lived in Shanghai for at least 6 months. The "known population method", with adjustment by backward estimation and a regression model, was applied to estimate the c value. The MSM population size was then estimated using an adjusted c value that takes into account the transmission effect through the level of social respect towards MSM. A total of 4017 participants were contacted for an interview, and 3907 participants met the inclusion criterion. The social network size (c value) of participants was 236 after adjustment. The estimated size of the MSM population was 36354 (95% CI: 28489-44219) for males in Shanghai aged 18 to 60 years, and the proportion of MSM among the total male population aged 18 to 60 years in Shanghai was 0.28%. We employed the network scale-up method and used a wide range of data sources to estimate the size of the MSM population in Shanghai, which is useful for HIV prevention and intervention among the target population.
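
    The basic scale-up estimator behind such studies can be written as N_MSM ≈ (m̄ / c) × N, where m̄ is the mean number of MSM known per respondent, c the personal network size, and N the size of the male 18-60 population. In the sketch below only c = 236 comes from the abstract; m̄ and N are assumed values chosen so that the result lands near the reported figure of roughly 36,000 (0.28%).

      c = 236            # estimated personal network size (from the abstract)
      m_bar = 0.66       # assumed mean number of MSM known per respondent (illustrative)
      N = 13_000_000     # assumed number of males aged 18-60 in Shanghai (illustrative)

      msm_size = (m_bar / c) * N
      print(f"estimated MSM population ~ {msm_size:,.0f}")           # ~36,000
      print(f"proportion of male population ~ {msm_size / N:.2%}")   # ~0.28%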

  8. International orientation and export commitment in fast small and medium size firms internationalization: scales validation and implications for the Brazilian case

    Directory of Open Access Journals (Sweden)

    Marcelo André Machado

    Full Text Available Abstract A set of changes in the competitive environment has recently provoked the emergence of a new kind of organization that, from its creation, derives a meaningful share of its revenue from international activities developed in more than one continent. Within this new reality, models that describe the internationalization of the firm in phases, or according to its growth, have lost their capacity to explain this process for small- and medium-sized enterprises (SMEs). Thus, in this paper, the international orientation (IO) and export commitment (EC) constructs have been revised in the theoretical context of the fast internationalization of medium-sized companies, so as to identify scales that more accurately measure these dimensions in the Brazilian setting. After a literature review and exploratory research, the IO and EC scales proposed by Knight and Cavusgil (2004) and Shamsuddoha and Ali (2006) were respectively applied to a sample of 398 small- and medium-sized exporting Brazilian companies. In spite of the differences in economic conditions and context inherent to Brazilian companies, the selected scales presented high measurement reliability. Furthermore, the field research outcomes provide evidence for the existence of a phenomenon of fast internationalization among medium-sized companies in Brazil, as well as support for some theoretical assumptions of other empirical investigations carried out with samples from developed countries.

  9. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  10. Design of Centrifugal Casting Machine for Pipe of Large Scale S.G. Ductile Cast Iron

    Institute of Scientific and Technical Information of China (English)

    习杰

    2011-01-01

    The design principles and methods of the centrifugal casting machine used to produce large-diameter (DN1000 and above) s.g. ductile cast iron pipe by the coated, water-cooled metal die centrifugal casting process are introduced.

  11. There is no universal molecular clock for invertebrates, but rate variation does not scale with body size.

    Science.gov (United States)

    Thomas, Jessica A; Welch, John J; Woolfit, Megan; Bromham, Lindell

    2006-05-01

    The existence of a universal molecular clock has been called into question by observations that substitution rates vary widely between lineages. However, increasing empirical evidence for the systematic effects of different life history traits on the rate of molecular evolution has raised hopes that rate variation may be predictable, potentially allowing the "correction" of the molecular clock. One such example is the body size trend observed in vertebrates; smaller species tend to have faster rates of molecular evolution. This effect has led to the proposal of general predictive models correcting for rate heterogeneity and has also been invoked to explain discrepancies between molecular and paleontological dates for explosive radiations in the fossil record. Yet, there have been no tests of an effect in any nonvertebrate taxa. In this study, we have tested the generality of the body size effect by surveying a wide range of invertebrate metazoan lineages. DNA sequences and body size data were collected from the literature for 330 species across five phyla. Phylogenetic comparative methods were used to investigate a relationship between average body size and substitution rate at both interspecies and interfamily comparison levels. We demonstrate significant rate variation in all phyla and most genes examined, implying a strict molecular clock cannot be assumed for the Metazoa. Furthermore, we find no evidence of any influence of body size on invertebrate substitution rates. We conclude that the vertebrate body size effect is a special case, which cannot be simply extrapolated to the rest of the animal kingdom.

  12. Machine Transliteration

    CERN Document Server

    Knight, K; Knight, Kevin; Graehl, Jonathan

    1997-01-01

    It is challenging to translate names and technical terms across languages with different alphabets and sound inventories. These items are commonly transliterated, i.e., replaced with approximate phonetic equivalents. For example, "computer" in English comes out as "konpyuutaa" in Japanese. Translating such items from Japanese back to English is even more challenging, and of practical interest, as transliterated items make up the bulk of text phrases not found in bilingual dictionaries. We describe and evaluate a method for performing backwards transliterations by machine. This method uses a generative model, incorporating several distinct stages in the transliteration process.

  13. Large and Medium-sized Repair and Maintenance Cost Forecasting Based on Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    龚静; 徐柏才; 崔恒凤; 孙玮

    2015-01-01

    This paper uses the least squares support vector machine, an optimal general-purpose learning method for small samples, to predict maintenance costs. It analyzes the basic principle of the algorithm, constructs the characteristic index system and cost prediction model for large and medium-sized repair and maintenance of ordinary highways, prepares a solver in MATLAB, obtains the predicted values of large and medium-sized repair and maintenance costs, and compares them with the results of a general prediction method to illustrate the superiority of the algorithm. Using this algorithm can effectively improve prediction accuracy, so it can provide a reference for the actual development of maintenance cost plans.

  14. Vine planting rights, farm size and economic performance: do economies of scale matter in the French viticulture sector?

    OpenAIRE

    Bernard Delord; Étienne Montaigne; Alfredo Coelho

    2014-01-01

    This paper assesses the existence of both greater profitability for large-scale farms and economies of scale in the French viticulture sector, thereby confirming or invalidating the argument put forward by the European Commission to justify the abolition of vine planting rights. According to this argument (1) economic efficiency increases with the extension of the vine area in vineyards, and (2) vine planting rights prevent the expansion of farms. This article discusses the issue of econom...

  15. Machine Protection

    CERN Document Server

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an ...

  16. Rapid Response Small Machining NNR Project 703025

    Energy Technology Data Exchange (ETDEWEB)

    Kanies, Tim

    2008-12-05

    This project was an effort to develop a machining area for small sized parts that is capable of delivering product with a quick response time. This entailed focusing efforts on leaning out specific work cells that would result in overall improvement to the entire machining area. This effort involved securing the most efficient available technologies for these areas. In the end, this incorporated preparing the small machining area for transformation to a new facility.

  17. Machine learning for medical images analysis.

    Science.gov (United States)

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods.

  18. Application of scaling and kinetic equations to helium cluster size distributions: Homogeneous nucleation of a nearly ideal gas.

    Science.gov (United States)

    Chaiken, J; Goodisman, J; Kornilov, Oleg; Peter Toennies, J

    2006-08-21

    A previously published model of homogeneous nucleation [Villarica et al., J. Chem. Phys. 98, 4610 (1993)] based on the Smoluchowski [Phys. Z. 17, 557 (1916)] equations is used to simulate the experimentally measured size distributions of 4He clusters produced in free jet expansions. The model includes only binary collisions and does not consider evaporative effects, so that binary reactive collisions are rate limiting for formation of all cluster sizes despite the need for stabilization of nascent clusters. The model represents these data very well, accounting in some cases for nearly four orders of magnitude in variation in abundance over cluster sizes ranging up to nearly 100 atoms. The success of the model may be due to particularities of 4He clusters, i.e., their very low coalescence exothermicity, and to the low temperature of 6.7 K at which the data were collected.
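
    For reference, the discrete Smoluchowski coagulation equations referred to above are commonly written in the following standard textbook form (the specific binary rate kernel K_ij used by the cited model is not reproduced here):

      \frac{dn_k}{dt} \;=\; \frac{1}{2} \sum_{i+j=k} K_{ij}\, n_i\, n_j \;-\; n_k \sum_{j \ge 1} K_{kj}\, n_j

    where n_k is the number density of clusters containing k atoms and K_ij is the rate coefficient for binary coagulation of an i-cluster with a j-cluster.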

  19. Detection of atomic scale changes in the free volume void size of three-dimensional colorectal cancer cell culture using positron annihilation lifetime spectroscopy.

    Science.gov (United States)

    Axpe, Eneko; Lopez-Euba, Tamara; Castellanos-Rubio, Ainara; Merida, David; Garcia, Jose Angel; Plaza-Izurieta, Leticia; Fernandez-Jimenez, Nora; Plazaola, Fernando; Bilbao, Jose Ramon

    2014-01-01

    Positron annihilation lifetime spectroscopy (PALS) provides a direct measurement of the free volume void sizes in polymers and biological systems. This free volume is critical in explaining and understanding physical and mechanical properties of polymers. Moreover, PALS has been recently proposed as a potential tool in detecting cancer at early stages, probing the differences in the subnanometer scale free volume voids between cancerous/healthy skin samples of the same patient. Despite several investigations on free volume in complex cancerous tissues, no positron annihilation studies of living cancer cell cultures have been reported. We demonstrate that PALS can be applied to the study of living human 3D cell cultures. The technique is also capable of detecting atomic-scale changes in the size of the free volume voids due to the biological responses to TGF-β. PALS may be developed to characterize the effect of different culture conditions in the free volume voids of cells grown in vitro.

  20. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today’s markets. The authors address traditional machining topics such as single- and multiple-point cutting processes, grinding, component accuracy and metrology, shear stress in cutting, cutting temperature and analysis, and chatter. They also address non-traditional machining such as electrical discharge machining, electrochemical machining, and laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  1. Hidden zero-temperature bicritical point in the two-dimensional anisotropic Heisenberg model: Monte Carlo simulations and proper finite-size scaling

    OpenAIRE

    Zhou, Chenggang; Landau, D. P.; Schulthess, Thomas C.

    2006-01-01

    By considering the appropriate finite-size effect, we explain the connection between Monte Carlo simulations of two-dimensional anisotropic Heisenberg antiferromagnet in a field and the early renormalization group calculation for the bicritical point in 2+ε dimensions. We found that the long length scale physics of the Monte Carlo simulations is indeed captured by the anisotropic nonlinear σ model. Our Monte Carlo data and analysis confirm that the bicritical point in two dime...

  2. Size variation and collapse of emphysema holes at inspiration and expiration CT scan: evaluation with modified length scale method and image co-registration.

    Science.gov (United States)

    Oh, Sang Young; Lee, Minho; Seo, Joon Beom; Kim, Namkug; Lee, Sang Min; Lee, Jae Seung; Oh, Yeon Mok

    2017-01-01

    A novel approach of size-based emphysema clustering has been developed, and the size variation and collapse of holes in emphysema clusters are evaluated at inspiratory and expiratory computed tomography (CT). Thirty patients were visually evaluated for the size-based emphysema clustering technique and a total of 72 patients were evaluated for analyzing collapse of the emphysema hole in this study. A new approach for the size differentiation of emphysema holes was developed using the length scale, Gaussian low-pass filtering, and iteration approach. Then, the volumetric CT results of the emphysema patients were analyzed using the new method, and deformable registration was carried out between inspiratory and expiratory CT. Blind visual evaluations of EI by two readers had significant correlations with the classification using the size-based emphysema clustering method (r-values of reader 1: 0.186, 0.890, 0.915, and 0.941; reader 2: 0.540, 0.667, 0.919, and 0.942). The results of collapse of emphysema holes using deformable registration were compared with the pulmonary function test (PFT) parameters using the Pearson's correlation test. The mean extents of low-attenuation area (LAA), E1 (size variation and collapse of emphysema holes may be useful for understanding the dynamic collapse of emphysema and its functional relation.

  3. Power-law Scaling of Fracture Aperture Sizes in Otherwise-Undeformed Foreland Basin Sandstone: An Example From the Cozzette Sandstone, Piceance Basin, Colorado

    Science.gov (United States)

    Hooker, J. N.; Gale, J. F.; Laubach, S. E.; Gomez, L. A.; Marrett, R.; Reed, R. M.

    2007-12-01

    Power-law variation of aperture size with cumulative frequency has been documented in vein arrays, but such patterns have not been conclusively demonstrated from open or incompletely mineralized opening-mode fractures (joints) in otherwise-undeformed sedimentary rocks. We used subhorizontal core from the nearly flat- lying Cretaceous Cozzette Sandstone, Piceance Basin, Colorado, to document fracture aperture sizes over five orders of magnitude. We measured microfractures (0.0004-0.1164 mm in aperture) along a 276-mm-long scanline using scanning electron microscope-based cathodoluminescence; we measured macrofractures (0.5- 2.15 mm in aperture) in 35 m of approximately horizontal core cut normal to fracture strike. Microfractures are typically filled with quartz. Macrofractures are mostly open and resemble non-mineralized joints, except for thin veneers of quartz cement lining their walls. Micro- and macrofractures share both a common orientation and the same timing with respect to diagenetic sequence, only differing in size and the degree to which they are filled with quartz cement. Power-law scaling equations were derived by fitting trendlines to aperture vs. cumulative frequency data for the microfractures. These equations successfully predicted the cumulative frequencies of the macrofractures, accurate to within a factor of four in each test and within a factor of two in 75 percent of tests. Our results show that tectonic deformation is not prerequisite for power-law scaling of fractures, but instead suggest that scaling emerges from fracture interaction during propagation.

  4. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% survival over four years, then sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated because of the difficulty in attaining reliable estimates. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  5. Aerosols generated during beryllium machining.

    Science.gov (United States)

    Martyny, J W; Hoover, M D; Mroz, M M; Ellis, K; Maier, L A; Sheff, K L; Newman, L S

    2000-01-01

    Some beryllium processes, especially machining, are associated with an increased risk of beryllium sensitization and disease. Little is known about exposure characteristics contributing to risk, such as particle size. This study examined the characteristics of beryllium machining exposures under actual working conditions. Stationary samples, using eight-stage Lovelace Multijet Cascade Impactors, were taken at the process point of operation and at the closest point that the worker would routinely approach. Paired samples were collected at the operator's breathing zone by using a Marple Personal Cascade Impactor and a 35-mm closed-faced cassette. More than 50% of the beryllium machining particles in the breathing zone were less than 10 microns in aerodynamic diameter. This small particle size may result in beryllium deposition into the deepest portion of the lung and may explain elevated rates of sensitization among beryllium machinists.

  6. Machinability of Green Powder Metallurgy Components: Part II. Sintered Properties of Components Machined in Green State

    Science.gov (United States)

    Robert-Perron, Etienne; Blais, Carl; Pelletier, Sylvain; Thomas, Yannig

    2007-06-01

    Green machining is virtually a must if the powder metallurgy (PM) industry is to overcome the poor machining performance associated with PM components. The process is known for lowering the rate of tool wear. Recent improvements in binder/lubricant technologies have led to high-green-strength systems that enable green machining. Combined with the optimized cutting parameters determined in Part I of the study, green machining of PM components appears to be a viable process for fabricating high-performance parts on a large scale and for complementing other shaping processes. This second part of our study presents a comparison between the machining behaviors and the sintered properties of components machined prior to or after sintering. The results show that the radial crush strength measured on rings machined in their green state is equal to that of parts machined after sintering.

  7. First measurement of the small-scale spatial variability of the rain drop size distribution: Results from a crucial experiment and maximum entropy modeling

    CERN Document Server

    Checa-Garcia, Ramiro

    2013-01-01

    The main challenges of measuring precipitation are related to the spatio-temporal variability of the drop-size distribution, to the uncertainties that condition the modeling of that distribution, and to the instrumental errors present in in situ estimations. This PhD dissertation proposes advances on all these questions. The relevance of the spatial variability of the drop-size distribution for remote sensing measurements and hydro-meteorology field studies is assessed by analyzing measurements from a set of disdrometers deployed over a network of 5 square kilometers. This study comprises the spatial variability of integral rainfall parameters, the Z-R relationships, and the variations within the one-moment scaling method. The modeling of the drop-size distribution is analyzed by applying the MaxEnt method and comparing it with the method of moments and maximum likelihood. The instrumental errors are analyzed with a comprehensive comparison of sampling and binning uncertainties that affect actual device...

  8. Nano-size scaling of alloy intra-particle vs. inter-particle separation transitions: prediction of distinctly interface-affected critical behaviour.

    Science.gov (United States)

    Polak, M; Rubinovich, L

    2016-07-21

    Phase-separation second-order transitions in binary alloy particles consisting of ∼1000 up to ∼70 000 atoms (∼1-10 nm) are modeled, focusing on the unexplored issue of finite-size scaling in such systems, particularly on evaluation of correlation-length critical exponents. Our statistical-thermodynamic approach is based on a mean-field analytical expression for the Ising model free energy that facilitates highly efficient computations, furnishing comprehensive data for fcc rectangular nanoparticles (NPs). These are summed up in intra- and inter-particle scaling plots as well as in nanophase separation diagrams. Temperature-induced variations in the interface thickness in Janus-type intra-particle configurations and NP size-dependent shifts in the critical temperature of their transition to solid solution reflect power-law behavior with the same critical exponent, ν = 0.83. This is attributed to dominant interfacial effects that are absent in inter-particle transitions. Variations in ν with nano-size, as revealed by a refined analysis, are linearly extrapolated in order to bridge the gap to larger particles within and well beyond the nanoscale, ultimately yielding ν = 1.0. Beyond these findings, the study indicates the key role of the surface-area to volume ratio as an effective linear size, revealing a universal, particle-shape-independent nanoscaling of the critical-temperature shifts.

  9. Sizing and Siting of Large-Scale Batteries in Transmission Grids to Optimize the Use of Renewables

    NARCIS (Netherlands)

    Fiorini, Laura; Pagani, Giuliano; Pelacchi, P.; Poli, Davide; Aiello, Marco

    2017-01-01

    Power systems are a recent field of application of Complex Network research, which allows large-scale studies and evaluations to be performed. Based on this theory, a power grid is modeled as a weighted graph with several kinds of nodes and edges, and further analysis can help in investigating the behavior...

  10. Evolution of small-scale magnetic elements in the vicinity of granular-size swirl convective motions

    CERN Document Server

    Dominguez, S Vargas; Balmaceda, L; Cabello, I; Domingo, V

    2014-01-01

    Advances in solar instrumentation have led to widespread usage of time series to study the dynamics of solar features, especially at small spatial scales and at very fast cadences. Physical processes at such scales are determinant as building blocks for many others occurring from the lower to the upper layers of the solar atmosphere and beyond, and ultimately for understanding the bigger picture of solar activity. Ground-based (SST) and space-borne (Hinode) high-resolution solar data are analyzed in a quiet Sun region displaying negative-polarity small-scale magnetic concentrations and a cluster of bright points observed in G-band and Ca II H images. The studied region is characterized by the presence of two small-scale convective vortex-type plasma motions, one of which appears to be affecting the dynamics of both magnetic features and bright points in its vicinity, and is therefore the main target of our investigations. We followed the evolution of bright points, intensity variations at different atmospheric heig...

  11. Turbulence-enhanced prey encounter rates in larval fish : Effects of spatial scale, larval behaviour and size

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; MacKenzie, Brian

    1995-01-01

    Turbulent water motion has several effects on the feeding ecology of larval fish and other planktivorous predators. In this paper, we consider the appropriate spatial scales for estimating relative velocities between larval fish predators and their prey, and the effect that different choices...

  12. Using a Support Vector Machine and a Land Surface Model to Estimate Large-Scale Passive Microwave Temperatures over Snow-Covered Land in North America

    Science.gov (United States)

    Forman, Barton A.; Reichle, Rolf Helmut

    2014-01-01

    A support vector machine (SVM), a machine learning technique developed from statistical learning theory, is employed for the purpose of estimating passive microwave (PMW) brightness temperatures over snow-covered land in North America as observed by the Advanced Microwave Scanning Radiometer (AMSR-E) satellite sensor. The capability of the trained SVM is compared against the artificial neural network (ANN) estimates originally presented in [14]. The results suggest the SVM outperforms the ANN at 10.65 GHz, 18.7 GHz, and 36.5 GHz for both vertically and horizontally polarized PMW radiation. When compared against daily AMSR-E measurements not used during the training procedure and subsequently averaged across the North American domain over the 9-year study period, the root mean squared error in the SVM output is 8 K or less while the anomaly correlation coefficient is 0.7 or greater. When compared against the results from the ANN at any of the six frequency and polarization combinations tested, the root mean squared error was reduced by more than 18 percent while the anomaly correlation coefficient was increased by more than 52 percent. Further, the temporal and spatial variability in the modeled brightness temperatures via the SVM more closely agrees with that found in the original AMSR-E measurements. These findings suggest the SVM is a superior alternative to the ANN for eventual use as a measurement operator within a data assimilation framework.
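    As a rough illustration of the regression task described above (predicting brightness temperatures from land-surface states with a support vector machine), the sketch below trains a scikit-learn SVR on synthetic stand-in data and reports RMSE and a simple anomaly correlation. The predictors, kernel settings, and the anomaly definition are all illustrative assumptions; the study's actual training data come from a land surface model and AMSR-E observations.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins for land-surface-model states (e.g. snow mass, temperature)
# and brightness temperatures in kelvin; illustrative only.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 4))                  # 4 illustrative geophysical predictors
y = 250 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=2000)

train, test = slice(0, 1500), slice(1500, None)
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
svm.fit(X[train], y[train])

pred = svm.predict(X[test])
rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
# Simplified "anomaly" correlation: correlation of mean-removed series
# (the study's anomalies are defined against a climatology).
anom_corr = np.corrcoef(pred - pred.mean(), y[test] - y[test].mean())[0, 1]
print(f"RMSE = {rmse:.2f} K, anomaly correlation = {anom_corr:.2f}")
```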

  13. Two-machine flow shop scheduling integrated with preventive maintenance planning

    Science.gov (United States)

    Wang, Shijin; Liu, Ming

    2016-02-01

    This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop, with the time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of the integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with a total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and of individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results with several large problem sizes and different configurations indicate the potential benefits of the integrated scheduling solution, and the results also show that the proposed GA-based heuristics are efficient for the integrated problem.
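    For intuition about the small-scale total-enumeration experiments mentioned above, the sketch below enumerates every job sequence and every PM decision for a toy two-machine flow shop. The expected-delay model (skipping PM adds an expected repair delay p_fail * repair_time to the job's processing time) is a deliberate simplification and an assumption of this sketch; the paper works with Weibull time-to-failure distributions.

```python
import itertools

# Toy two-machine flow shop with an optional PM activity before each job.
jobs = [(4, 3), (2, 5), (6, 2)]          # (machine-1 time, machine-2 time)
PM_TIME, REPAIR_TIME, P_FAIL = 1.5, 6.0, 0.3

def expected_makespan(seq, pm_plan):
    """pm_plan[k][m] is True if PM is performed on machine m before the k-th job."""
    c1 = c2 = 0.0
    for k, j in enumerate(seq):
        t1 = jobs[j][0] + (PM_TIME if pm_plan[k][0] else P_FAIL * REPAIR_TIME)
        t2 = jobs[j][1] + (PM_TIME if pm_plan[k][1] else P_FAIL * REPAIR_TIME)
        c1 += t1                          # completion on machine 1
        c2 = max(c1, c2) + t2             # completion on machine 2
    return c2

pm_options = [(a, b) for a in (False, True) for b in (False, True)]
best = min(
    (expected_makespan(seq, pm), seq, pm)
    for seq in itertools.permutations(range(len(jobs)))
    for pm in itertools.product(pm_options, repeat=len(jobs))
)
print(f"best expected makespan {best[0]:.1f} with job sequence {best[1]}")
```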

  14. Machine Learning in the Big Data Era: Are We There Yet?

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas Rangan [ORNL

    2014-01-01

    In this paper, we discuss the machine learning challenges of the Big Data era. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical machine learning under more scrutiny and evaluation for gleaning insights from the data than ever before. In that context, we pose and debate the question - Are machine learning algorithms scaling with the ability to store and compute? If yes, how? If not, why not? We survey recent developments in the state-of-the-art to discuss emerging and outstanding challenges in the design and implementation of machine learning algorithms at scale. We leverage experience from real-world Big Data knowledge discovery projects across domains of national security and healthcare to suggest our efforts be focused along the following axes: (i) the data science challenge - designing scalable and flexible computational architectures for machine learning (beyond just data-retrieval); (ii) the science of data challenge - the ability to understand characteristics of data before applying machine learning algorithms and tools; and (iii) the scalable predictive functions challenge - the ability to construct, learn and infer with increasing sample size, dimensionality, and categories of labels. We conclude with a discussion of opportunities and directions for future research.

  15. Size effect and scaling power-law for superelasticity in shape-memory alloys at the nanoscale.

    Science.gov (United States)

    Gómez-Cortés, Jose F; Nó, Maria L; López-Ferreño, Iñaki; Hernández-Saz, Jesús; Molina, Sergio I; Chuvilin, Andrey; San Juan, Jose M

    2017-08-01

    Shape-memory alloys capable of a superelastic stress-induced phase transformation and a high displacement actuation have promise for applications in micro-electromechanical systems for wearable healthcare and flexible electronic technologies. However, some of the fundamental aspects of their nanoscale behaviour remain unclear, including the question of whether the critical stress for the stress-induced martensitic transformation exhibits a size effect similar to that observed in confined plasticity. Here we provide evidence of a strong size effect on the critical stress that induces such a transformation: the trigger stress increases threefold as the diameter of pillars milled from [001] L21 single crystals of a Cu-Al-Ni shape-memory alloy decreases from 2 μm to 260 nm. A power-law size dependence of n = -2 is observed for the nanoscale superelasticity. Our observation is supported by the atomic lattice shearing and an elastic model for homogeneous martensite nucleation.

  16. Thermal, size and surface effects on the nonlinear pull-in of small-scale piezoelectric actuators

    Science.gov (United States)

    SoltanRezaee, Masoud; Ghazavi, Mohammad-Reza

    2017-09-01

    Electrostatically actuated miniature wires/tubes have many operational applications in the high-tech industries. In this research, the nonlinear pull-in instability of piezoelectric thermal small-scale switches subjected to Coulomb and dissipative forces is analyzed using strain gradient and modified couple stress theories. The discretized governing equation is solved numerically by means of the step-by-step linearization method. The correctness of the formulated model and solution procedure is validated through comparison with experimental and several theoretical results. Herein, the length scale, surface energy, van der Waals attraction and nonlinear curvature are considered in the present comprehensive model, and the thermo-electro-mechanical behavior of cantilever piezo-beams is discussed in detail. It is found that the piezoelectric actuation can be used as a design parameter to control the pull-in phenomenon. The obtained results are applicable in stability analysis, practical design and control of actuated miniature intelligent devices.

  17. Effects of Sizes and Conformations of Fish-Scale Collagen Peptides on Facial Skin Qualities and Transdermal Penetration Efficiency

    Directory of Open Access Journals (Sweden)

    Huey-Jine Chai

    2010-01-01

    Fish-scale collagen peptides (FSCPs) were prepared using a given combination of proteases to hydrolyze tilapia (Oreochromis sp.) scales. FSCPs were found to stimulate fibroblast cell proliferation and procollagen synthesis in a time- and dose-dependent manner. The transdermal penetration capabilities of the fractionated FSCPs were evaluated using the Franz-type diffusion cell model. The heavier FSCPs, 3500 and 4500 Da, showed higher cumulative penetration capability than the lighter FSCPs, 2000 and 1300 Da. In addition, the heavier peptides appeared to preserve favorable coiled structures, whereas the lighter ones presented mainly as linear under confocal scanning laser microscopy. FSCPs, particularly the heavier ones, were concluded to efficiently penetrate the stratum corneum to the epidermis and dermis, activate fibroblasts, and accelerate collagen synthesis. The heavier peptides outperform the lighter ones in transdermal penetration, likely as a result of preserving the desired structural features.

  18. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    Science.gov (United States)

    Singh, Warsha; Örnólfsdóttir, Erla B; Stefansson, Gunnar

    2014-01-01

    An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll was 1.3 and 2.3 deg that resulted in <2% error in ground distance rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.

  19. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    Directory of Open Access Journals (Sweden)

    Warsha Singh

    An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll was 1.3 and 2.3 deg that resulted in <2% error in ground distance rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.

  20. Automation of printing machine

    OpenAIRE

    Sušil, David

    2016-01-01

    This bachelor thesis focuses on the automation of a printing machine and on comparing two types of printing machines. The first chapter deals with the history of printing, typesetting, printing techniques and various kinds of bookbinding. The second chapter describes the difference between sheet-fed printing machines and offset printing machines, the difference between two representatives of rotary machines, the technological processing of products on these machines, and the description of the mac...

  1. Finite size corrections to scaling of the formation probabilities and the Casimir effect in the conformal field theories

    Science.gov (United States)

    Rajabpour, M. A.

    2016-12-01

    We calculate formation probabilities of the ground state of finite-size quantum critical chains using conformal field theory (CFT) techniques. In particular, we calculate the formation probability of one interval in the finite open chain and also the formation probability of two disjoint intervals in a finite periodic system. The presented formulas can also be interpreted as the Casimir energy of needles in particular geometries. We numerically check the validity of the exact CFT results in the case of the transverse field Ising chain.

  2. The dilemma of choosing a reference character for measuring sexual size dimorphism, sexual body component dimorphism, and character scaling: cryptic dimorphism and allometry in the scorpion Hadrurus arizonensis.

    Directory of Open Access Journals (Sweden)

    Gerad A Fox

    Sexual differences in morphology, ranging from subtle to extravagant, occur commonly in many animal species. These differences can encompass overall body size (sexual size dimorphism, SSD) or the size and/or shape of specific body parts (sexual body component dimorphism, SBCD). Interacting forces of natural and sexual selection shape much of the expression of dimorphism we see, though non-adaptive processes may be involved. Differential scaling of individual features can result when selection favors either exaggerated (positive allometry) or reduced (negative allometry) size during growth. Studies of sexual dimorphism and character scaling rely on multivariate models that ideally use an unbiased reference character as an overall measure of body size. We explored several candidate reference characters in a cryptically dimorphic taxon, Hadrurus arizonensis. In this scorpion, essentially every body component among the 16 we examined could be interpreted as dimorphic, but identification of SSD and SBCD depended on which character was used as the reference (prosoma length, prosoma area, total length, principal component 1, or metasoma segment 1 width). Of these characters, discriminant function analysis suggested that metasoma segment 1 width was the most appropriate. The pattern of dimorphism in H. arizonensis mirrored that seen in other more obviously dimorphic scorpions, with static allometry trending towards isometry in most characters. Our findings are consistent with the conclusions of others that fecundity selection likely favors a larger prosoma in female scorpions, whereas sexual selection may favor other body parts being larger in males, especially the metasoma, pectines, and possibly the chela. For this scorpion and probably most other organisms, the choice of reference character profoundly affects interpretations of SSD, SBCD, and allometry. Thus, researchers need to broaden their consideration of an appropriate reference and exercise caution

  3. The dilemma of choosing a reference character for measuring sexual size dimorphism, sexual body component dimorphism, and character scaling: cryptic dimorphism and allometry in the scorpion Hadrurus arizonensis.

    Science.gov (United States)

    Fox, Gerad A; Cooper, Allen M; Hayes, William K

    2015-01-01

    Sexual differences in morphology, ranging from subtle to extravagant, occur commonly in many animal species. These differences can encompass overall body size (sexual size dimorphism, SSD) or the size and/or shape of specific body parts (sexual body component dimorphism, SBCD). Interacting forces of natural and sexual selection shape much of the expression of dimorphism we see, though non-adaptive processes may be involved. Differential scaling of individual features can result when selection favors either exaggerated (positive allometry) or reduced (negative allometry) size during growth. Studies of sexual dimorphism and character scaling rely on multivariate models that ideally use an unbiased reference character as an overall measure of body size. We explored several candidate reference characters in a cryptically dimorphic taxon, Hadrurus arizonensis. In this scorpion, essentially every body component among the 16 we examined could be interpreted as dimorphic, but identification of SSD and SBCD depended on which character was used as the reference (prosoma length, prosoma area, total length, principal component 1, or metasoma segment 1 width). Of these characters, discriminant function analysis suggested that metasoma segment 1 width was the most appropriate. The pattern of dimorphism in H. arizonensis mirrored that seen in other more obviously dimorphic scorpions, with static allometry trending towards isometry in most characters. Our findings are consistent with the conclusions of others that fecundity selection likely favors a larger prosoma in female scorpions, whereas sexual selection may favor other body parts being larger in males, especially the metasoma, pectines, and possibly the chela. For this scorpion and probably most other organisms, the choice of reference character profoundly affects interpretations of SSD, SBCD, and allometry. Thus, researchers need to broaden their consideration of an appropriate reference and exercise caution in interpreting

  4. Macchine per scoprire - Discovery Machines

    CERN Multimedia

    Auditorium, Rome

    2016-01-01

    During the FCC week 2016, a public event entitled “Discovery Machines: The Higgs Boson and the Search for New Physics” took place on 14 April at the Auditorium in Rome. The event brought together physicists and experts from economics to discuss intriguing questions on the origin and evolution of the Universe and the societal impact of large-scale research projects.

  5. Polycyclic aromatic hydrocarbons in air on small spatial and temporal scales - II. Mass size distributions and gas-particle partitioning

    Science.gov (United States)

    Lammel, Gerhard; Klánová, Jana; Ilić, Predrag; Kohoutek, Jiří; Gasić, Bojan; Kovacić, Igor; Škrdlíková, Lenka

    2010-12-01

    Polycyclic aromatic hydrocarbons (PAHs) were measured together with inorganic air pollutants at two urban sites and one rural background site in the Banja Luka area, Bosnia and Hercegovina, during 72 h in July 2008 using a high time resolution (5 samples per day), with the aim of studying gas-particle partitioning and aerosol mass size distributions and of exploring the potential of a higher time resolution (4-h sampling). In the particulate phase, the mass median diameters of the PAHs were found almost exclusively in the accumulation mode (0.1-1.0 μm). These were larger for semivolatile PAHs than for non-volatile PAHs. Gas-particle partitioning of semivolatile PAHs was strongly influenced by temperature. The results suggest that the Junge-Pankow model is inadequate to explain the inter-species variation and that another process, less temperature sensitive than adsorption, must be significant for phase partitioning. Care should be taken when interpreting slopes m of plots of the type log Kp = m log pL0 + b based on 24-h means, as these are found to be sensitive to the time averaging, i.e. they tend to be higher than when based on 12-h mean samples.
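    The slope-intercept analysis referred to above (plots of log Kp against log pL0) is an ordinary least-squares fit in log-log space. The short sketch below shows such a fit with made-up illustrative values for the subcooled-liquid vapour pressure and partitioning coefficient; the numbers are assumptions, not the campaign's measurements.

```python
import numpy as np

# Illustrative gas-particle partitioning data (not the campaign's measurements):
# subcooled-liquid vapour pressures p_L0 (Pa) and partitioning coefficients K_p (m3/ug).
p_L0 = np.array([1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6])
K_p = np.array([3e-5, 2e-4, 1.5e-3, 1.2e-2, 9e-2, 7e-1])

# Fit log10(K_p) = m * log10(p_L0) + b; slopes near -1 are commonly read as
# equilibrium adsorption/absorption partitioning.
m, b = np.polyfit(np.log10(p_L0), np.log10(K_p), 1)
print(f"slope m = {m:.2f}, intercept b = {b:.2f}")
```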

  6. Pelagic larval duration and settlement size of a reef fish are spatially consistent, but post-settlement growth varies at the reef scale

    Science.gov (United States)

    Leahy, Susannah M.; Russ, Garry R.; Abesamis, Rene A.

    2015-12-01

    Recent research has demonstrated that, despite a pelagic larval stage, many coral reef fishes disperse over relatively small distances, leading to well-connected populations on scales of 0-30 km. Although variation in key biological characteristics has been explored on the scale of 100-1000s of km, it has rarely been explored at the scale relevant to actual larval dispersal and population connectivity on ecological timescales. In this study, we surveyed the habitat and collected specimens (n = 447) of juvenile butterflyfish, Chaetodon vagabundus, at nine sites along an 80-km stretch of coastline in the central Philippines to identify variation in key life history parameters at a spatial scale relevant to population connectivity. Mean pelagic larval duration (PLD) was 24.03 d (SE = 0.16 d), and settlement size was estimated to be 20.54 mm total length (TL; SE = 0.61 mm). Both traits were spatially consistent, although this PLD is considerably shorter than that reported elsewhere. In contrast, post-settlement daily growth rates, calculated from otolith increment widths from 1 to 50 d post-settlement, varied strongly across the study region. Elevated growth rates were associated with rocky habitats that this species is known to recruit to, but were strongly negatively correlated with macroalgal cover and exhibited negative density dependence with conspecific juveniles. Larger animals had lower early (first 50 d post-settlement) growth rates than smaller animals, even after accounting for seasonal variation in growth rates. Both VBGF and Gompertz models provided good fits to post-settlement size-at-age data (n = 447 fish), but the VBGF's estimate of asymptotic length (L∞ = 168 mm) was more consistent with field observations of maximum fish length. Our findings indicate that larval characteristics are consistent at the spatial scale at which populations are likely well connected, but that site-level biological differences develop post-settlement, most likely as a

  7. Giant Peltier Effect in a Submicron-Sized Cu-Ni/Au Junction with Nanometer-Scale Phase Separation

    Science.gov (United States)

    Sugihara, Atsushi; Kodzuka, Masaya; Yakushiji, Kay; Kubota, Hitoshi; Yuasa, Shinji; Yamamoto, Atsushi; Ando, Koji; Takanashi, Koki; Ohkubo, Tadakatsu; Hono, Kazuhiro; Fukushima, Akio

    2010-06-01

    We observed a giant Peltier effect in a submicron Cu-Ni/Au junction. The Peltier coefficient was evaluated to be 480 mV at room temperature from the balance between Joule heating and the Peltier cooling effect in the junction, which is 40 times that expected from the Seebeck coefficients of bulk Au and Cu-Ni alloy. This giant cooling effect lowered the inner temperature of the junction by 160 K. Microstructure analysis with a three-dimensional atom probe suggested that the giant Peltier effect possibly originated from nanometer-scale phase separation in the Cu-Ni layer.

  8. Machine musicianship

    Science.gov (United States)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.

  9. Remote Machining and Evaluation of Explosively Filled Munitions

    Data.gov (United States)

    Federal Laboratory Consortium — This facility is used for remote machining of explosively loaded ammunition. Munition sizes from small arms through 8-inch artillery can be accommodated. Sectioning,...

  10. Hematoma shape, hematoma size, Glasgow coma scale score and ICH score: which predicts the 30-day mortality better for intracerebral hematoma?

    Directory of Open Access Journals (Sweden)

    Chih-Wei Wang

    To investigate the performance of hematoma shape, hematoma size, Glasgow coma scale (GCS) score, and intracerebral hematoma (ICH) score in predicting the 30-day mortality of ICH patients, and to examine the influence of the estimation error of hematoma size on the prediction of 30-day mortality. This retrospective study, approved by a local institutional review board with written informed consent waived, recruited 106 patients diagnosed with ICH by non-enhanced computed tomography. The hemorrhagic shape, hematoma size measured by computer-assisted volumetric analysis (CAVA) and estimated by the ABC/2 formula, ICH score, and GCS score were examined. The performance of the aforementioned variables in predicting 30-day mortality was evaluated. Statistical analysis was performed using Kolmogorov-Smirnov tests, paired t tests, nonparametric tests, linear regression analysis, and binary logistic regression. Receiver operating characteristic curves were plotted and areas under the curve (AUC) were calculated for 30-day mortality. A P value less than 0.05 was considered statistically significant. The overall 30-day mortality rate was 15.1% of ICH patients. The hematoma shape, hematoma size, ICH score, and GCS score all significantly predicted the 30-day mortality for ICH patients, with AUCs of 0.692 (P = 0.0018), 0.715 (P = 0.0008, by ABC/2) to 0.738 (P = 0.0002, by CAVA), 0.877 (P < 0.0001, by ABC/2) to 0.882 (P < 0.0001, by CAVA), and 0.912 (P < 0.0001), respectively. Our study shows that hematoma shape, hematoma size, ICH score and GCS score all significantly predict the 30-day mortality, in increasing order of AUC. The effect of overestimation of hematoma size by the ABC/2 formula in predicting the 30-day mortality can be remedied by using the ICH score.
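    Two quantities in the record above lend themselves to a quick sketch: the ABC/2 bedside estimate of hematoma volume and the area under the ROC curve used to compare predictors. The Python below implements the standard ABC/2 formula and computes an AUC with scikit-learn on a synthetic cohort; the cohort, the risk model, and the resulting AUC are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def abc_over_2(a_cm, b_cm, c_cm):
    """Classic ABC/2 estimate of hematoma volume (cm^3): A and B are the largest
    perpendicular diameters on the axial slice with the largest bleed, and C is
    the vertical extent of the hemorrhage."""
    return a_cm * b_cm * c_cm / 2.0

# Synthetic cohort: volumes (cm^3) and 30-day mortality generated from an assumed
# risk that increases with volume -- illustrative only.
rng = np.random.default_rng(1)
volumes = rng.lognormal(mean=2.5, sigma=0.8, size=100)
risk = volumes / (volumes + 20.0)
mortality = rng.binomial(1, risk)

print(f"ABC/2 volume for A=5, B=4, C=3 cm: {abc_over_2(5, 4, 3):.1f} cm^3")
print(f"AUC of volume as a mortality predictor (synthetic): "
      f"{roc_auc_score(mortality, volumes):.2f}")
```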

  11. Turbulence-enhanced prey encounter rates in larval fish : Effects of spatial scale, larval behaviour and size

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; MacKenzie, Brian

    1995-01-01

    Turbulent water motion has several effects on the feeding ecology of larval fish and other planktivorous predators. In this paper, we consider the appropriate spatial scales for estimating relative velocities between larval fish predators and their prey, and the effect that different choices...... is consistent with classical coagulation theory. We then demonstrate that differences in larval search strategy (pause- travel versus cruise search) and behaviour (e.g. reactive distance, swimming speed, pause duration) will lead to substantial differences in estimated encounter rates. In general, small larvae...... are more likely to benefit from turbulence-increased encounter than larger larvae. Overall ingestion rate probability (= probability of encounter x probability of successful pursuit) is likely to be highest at moderate-high levels of turbulence. In most larval fish habitats, turbulence levels appear to lie...

  12. Electrical machines mathematical fundamentals of machine topologies

    CERN Document Server

    Gerling, Dieter

    2015-01-01

    Electrical Machines and Drives play a powerful role in industry with an ever increasing importance. This fact requires the understanding of machine and drive principles by engineers of many different disciplines. Therefore, this book is intended to give a comprehensive deduction of these principles. Special attention is given to the precise mathematical derivation of the necessary formulae to calculate machines and drives and to the discussion of simplifications (if applied) with the associated limits. The book shows how the different machine topologies can be deduced from general fundamentals, and how they are linked together. This book addresses graduate students, researchers, and developers of Electrical Machines and Drives, who are interested in getting knowledge about the principles of machine and drive operation and in detecting the mathematical and engineering specialties of the different machine and drive topologies together with their mutual links. The detailed - but nevertheless compact - mat...

  13. Two dimensional convolute integers for machine vision and image recognition

    Science.gov (United States)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two-dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convoluting, frequency-sensitive, two-dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface-fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive feature selection and scale-invariant properties. Tasks such as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.
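    In the spirit of the operators described above (regression-generated convolution kernels equivalent to surface-fitted polynomials), the sketch below builds a two-dimensional least-squares smoothing kernel and applies it as a classical convolution. It produces floating-point rather than integer-scaled weights and is an illustrative construction, not the paper's exact operator family.

```python
import numpy as np
from scipy.ndimage import convolve

def surface_fit_smoothing_kernel(window=5, order=2):
    """Least-squares 2-D smoothing kernel: fit a polynomial surface of the given
    order over a window x window neighbourhood and return the weights that
    evaluate the fitted surface at the centre point (a low-pass operator)."""
    half = window // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    terms = [x ** i * y ** j for i in range(order + 1)
             for j in range(order + 1 - i)]              # 1, y, y^2, x, xy, x^2, ...
    A = np.stack([t.ravel() for t in terms], axis=1)
    # The row of the pseudo-inverse that corresponds to the constant term gives
    # the centre-point smoothing weights of the fitted surface.
    weights = np.linalg.pinv(A)[0]
    return weights.reshape(window, window)

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64)) + np.outer(np.sin(np.linspace(0, 3, 64)),
                                             np.cos(np.linspace(0, 3, 64)))
smoothed = convolve(image, surface_fit_smoothing_kernel(5, 2), mode="nearest")
print(smoothed.shape)
```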

  14. Development of an ultraprecision three axis micromilling machine

    Science.gov (United States)

    Zhang, Peng; Wang, Bo; Liang, Yingchun

    2009-05-01

    To meet the requirement for high-efficiency machining of ultra-precision, ultra-smooth micro-structured optical surfaces, an ultra-precision three-axis micro milling machine was developed. The overall size of the machine is 600 mm × 500 mm × 700 mm, and the stroke of each of the three axes is 75 mm. To overcome the nonlinearity that always exists in conventional servo mechanisms driven by ball screws, a permanent-magnet linear motor is used to directly drive the aerostatic bearing slide. A linear encoder with 1.2 nm resolution is used as position feedback to build a closed-loop control system. The open-architecture CNC system is composed of a high-performance embedded PMAC motion control card and a standard industrial PC, and the control algorithm is based on a "PID + velocity/acceleration feed forward + notch filter" strategy. Test results indicate that the positioning accuracy of all three axes is less than +/-0.25 μm, and the repetitive positioning accuracy is less than +/-0.2 μm. Step-response and sinusoidal-tracking tests show that the machine achieves nanometer-scale motion control. Preliminary milling experiments with a micro cemented carbide milling cutter further demonstrate the machining capability.
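    The control strategy quoted above ("PID + velocity/acceleration feed forward + notch filter") can be sketched as a discrete-time servo update. The minimal Python class below shows the PID terms plus the two feed-forward terms and omits the notch filter; all gains, units, and the class name are illustrative assumptions rather than the machine's actual tuning.

```python
class PidFeedforwardController:
    """Minimal sketch of a 'PID + velocity/acceleration feed-forward' servo law
    (notch filter omitted); gains are illustrative only."""

    def __init__(self, kp, ki, kd, kvff, kaff, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.kvff, self.kaff, self.dt = kvff, kaff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_pos, target_vel, target_acc, measured_pos):
        # Position error drives the PID terms; the commanded velocity and
        # acceleration drive the feed-forward terms.
        error = target_pos - measured_pos
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral + self.kd * derivative
                + self.kvff * target_vel + self.kaff * target_acc)

ctrl = PidFeedforwardController(kp=80.0, ki=15.0, kd=0.5, kvff=1.0, kaff=0.02, dt=1e-3)
command = ctrl.update(target_pos=1.0e-3, target_vel=5e-3, target_acc=0.0,
                      measured_pos=0.98e-3)
print(f"servo command: {command:.6f}")
```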

  15. Laser machining of advanced materials

    CERN Document Server

    Dahotre, Narendra B

    2011-01-01

    Advanced materials: Introduction; Applications; Structural ceramics; Biomaterials; Composites; Intermetallics. Machining of advanced materials: Introduction; Fabrication techniques; Mechanical machining; Chemical machining (CM); Electrical machining; Radiation machining; Hybrid machining. Laser machining: Introduction; Absorption of laser energy and multiple reflections; Thermal effects. Laser machining of structural ceramics: Introdu...

  16. Diamond Measuring Machine

    Energy Technology Data Exchange (ETDEWEB)

    Krstulic, J.F.

    2000-01-27

    The fundamental goal of this project was to develop additional capabilities to the diamond measuring prototype, work out technical difficulties associated with the original device, and perform automated measurements which are accurate and repeatable. For this project, FM and T was responsible for the overall system design, edge extraction, and defect extraction and identification. AccuGem provided a lab and computer equipment in Lawrence, 3D modeling, industry expertise, and sets of diamonds for testing. The system executive software which controls stone positioning, lighting, focusing, report generation, and data acquisition was written in Microsoft Visual Basic 6, while data analysis and modeling were compiled in C/C++ DLLs. All scanning parameters and extracted data are stored in a central database and available for automated analysis and reporting. The Phase 1 study showed that data can be extracted and measured from diamond scans, but most of the information had to be manually extracted. In this Phase 2 project, all data required for geometric modeling and defect identification were automatically extracted and passed to a 3D modeling module for analysis. Algorithms were developed which automatically adjusted both light levels and stone focus positioning for each diamond-under-test. After a diamond is analyzed and measurements are completed, a report is printed for the customer which shows carat weight, summarizes stone geometry information, lists defects and their size, displays a picture of the diamond, and shows a plot of defects on a top view drawing of the stone. Initial emphasis of defect extraction was on identification of feathers, pinpoints, and crystals. Defects were plotted color-coded by industry standards for inclusions (red), blemishes (green), and unknown defects (blue). Diamonds with a wide variety of cut quality, size, and number of defects were tested in the machine. Edge extraction, defect extraction, and modeling code were tested for

  17. Finite-Size Scaling Approach for Critical Wetting: Rationalization in Terms of a Bulk Transition with an Order Parameter Exponent Equal to Zero

    Science.gov (United States)

    Albano, Ezequiel V.; Binder, Kurt

    2012-07-01

    Clarification of critical wetting with short-range forces by simulations has been hampered by the lack of accurate methods to locate where the transition occurs. We solve this problem by developing an anisotropic finite-size scaling approach and show that then the wetting transition is a “bulk” critical phenomenon with order parameter exponent equal to zero. For the Ising model in two dimensions, known exact results are straightforwardly reproduced. In three dimensions, it is shown that previous estimates for the location of the transition need revision, but the conclusions about a slow crossover away from mean-field behavior remain unaltered.

  18. Power-Law Scaling of the Impact Crater Size-Frequency Distribution on Pluto: A Preliminary Analysis Based on First Images from New Horizons' Flyby

    Directory of Open Access Journals (Sweden)

    Scholkmann F.

    2016-01-01

    The recent (14th July 2015) flyby of NASA's New Horizons spacecraft past the dwarf planet Pluto resulted in the first high-resolution images of the geological surface features of Pluto. Since previous studies showed that the impact crater size-frequency distribution (SFD) of different celestial objects of our solar system follows power laws, the aim of the present analysis was to determine, for the first time, the power-law scaling behavior of Pluto's crater SFD based on the first images available in mid-September 2015. The analysis was based on a high-resolution image covering parts of Pluto's regions Sputnik Planum, Al-Idrisi Montes and Voyager Terra. 83 impact craters could be identified in these regions and their diameters (D) were determined. The analysis revealed that the crater diameter SFD shows a statistically significant power-law scaling (α = 2.4926 ± 0.3309) in the interval of D values ranging from 3.75 ± 1.14 km to the largest determined D value in this data set, 37.77 km. The value obtained for the scaling coefficient α is similar to the coefficients determined for the power-law scaling of the crater SFDs of other celestial objects in our solar system. Further analysis of Pluto's crater SFD is warranted as soon as new images are received from the spacecraft.
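    A common way to estimate the exponent of a crater size-frequency distribution above a completeness limit is the continuous maximum-likelihood estimator alpha = 1 + n / sum(ln(D_i / D_min)). The sketch below applies it to synthetic diameters drawn from a Pareto tail; the sample and the resulting estimate are illustrative assumptions, not the 83 craters measured in the record above (which reports a fit near α ≈ 2.49).

```python
import numpy as np

def powerlaw_mle_alpha(diameters_km, d_min):
    """Maximum-likelihood exponent for a power-law tail P(D) ~ D**(-alpha), D >= d_min,
    using the continuous-data estimator alpha = 1 + n / sum(ln(D_i / d_min))."""
    d = np.asarray(diameters_km, dtype=float)
    d = d[d >= d_min]
    return 1.0 + d.size / np.sum(np.log(d / d_min))

# Synthetic crater diameters (km), drawn from a Pareto tail for illustration only.
rng = np.random.default_rng(7)
diam = 3.75 * (1 - rng.random(83)) ** (-1 / 1.5)      # sampled with true alpha = 2.5
print(f"estimated alpha = {powerlaw_mle_alpha(diam, d_min=3.75):.2f}")
```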

  19. Scale-up of the electrokinetic fence technology for the removal of pesticides. Part II: Does size matter for removal of herbicides?

    Science.gov (United States)

    López-Vizcaíno, R; Risco, C; Isidro, J; Rodrigo, S; Saez, C; Cañizares, P; Navarro, V; Rodrigo, M A

    2017-01-01

    This work reports results of the application of electrokinetic fence technology in a 32-m³ prototype containing soil polluted with 2,4-D and oxyfluorfen, focusing on the evaluation of the mechanisms that describe the removal of these two herbicides and comparing results to those obtained in smaller plants: a pilot-scale mockup (175 L) and a lab-scale soil column (1 L). Results show that electric heating of the soil (coupled with the increase in volatility) is the key to explaining the removal of pollutants in the largest-scale facility, while electrokinetic transport processes are the primary mechanisms that explain the removal of herbicides in the lab-scale plant. 2-D and 3-D maps of temperature and pollutant concentrations are used in the discussion of results to shed light on the mechanisms and on how the size of the setup can lead to different conclusions, even though the same processes are occurring in the soil.

  20. Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems.

    Science.gov (United States)

    Herman, Agnieszka

    2010-06-01

    Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning the FSD in the polar oceans are still sparse and the processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, the possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, the GLV distribution gives consistent estimates of the total floe perimeter, as well as a floe-area distribution in agreement with observations.
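    The distribution quoted above, P(x) = x^(-1-α) exp[(1-α)/x], combines a power-law tail at large floe sizes with an exponential roll-off at small sizes. The sketch below simply evaluates and numerically normalizes that density over a finite size range; the α value and the normalization interval are illustrative assumptions, not fitted FSD parameters.

```python
import numpy as np
from scipy.integrate import quad

def glv_fsd_pdf(x, alpha):
    """Unnormalised floe-size density quoted in the abstract:
    P(x) = x**(-1 - alpha) * exp((1 - alpha) / x)."""
    return x ** (-1.0 - alpha) * np.exp((1.0 - alpha) / x)

alpha = 1.8                                   # illustrative exponent
# Normalise numerically over a finite (dimensionless) floe-size range.
norm, _ = quad(glv_fsd_pdf, 0.01, 100.0, args=(alpha,))

sizes = np.logspace(-2, 2, 9)
density = glv_fsd_pdf(sizes, alpha) / norm
for s, p in zip(sizes, density):
    print(f"x = {s:8.3f}   P(x) = {p:.3e}")
```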