WorldWideScience

Sample records for high-performance thousand-metre scale

  1. MetReS, an Efficient Database for Genomic Applications.

    Science.gov (United States)

    Vilaplana, Jordi; Alves, Rui; Solsona, Francesc; Mateo, Jordi; Teixidó, Ivan; Pifarré, Marc

    2018-02-01

    MetReS (Metabolic Reconstruction Server) is a genomic database shared between two software applications that address important biological problems. Biblio-MetReS is a data-mining tool that enables the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the processes of interest and their function. The main goal of this work was to identify the areas where the performance of the MetReS database could be improved and to test whether this improvement would scale to larger datasets and more complex types of analysis. The study started with MySQL, the relational database server currently used by the applications. We also tested the performance of an alternative data-handling framework, Apache Hadoop, which is widely used for large-scale data processing. We found that this framework is likely to greatly improve the efficiency of the MetReS applications as the dataset and the processing needs increase by several orders of magnitude, as is expected to happen in the near future.

  2. Observed metre scale horizontal variability of elemental carbon in surface snow

    International Nuclear Information System (INIS)

    Svensson, J; Lihavainen, H; Ström, J; Hansson, M; Kerminen, V-M

    2013-01-01

    Surface snow sampled at two sites during the winter and spring of 2010 and analysed for elemental carbon (EC) with a thermal-optical method demonstrates metre-scale horizontal variability in concentration. Of the two sites sampled, one clean and one polluted, the clean site (Arctic Finland) presents the greater variability: in side-by-side comparisons of neighbouring samples taken 5 m apart, concentration ratios of around two were observed, whereas the median ratio between neighbouring samples at the polluted site was 1.2. The results suggest that regions exposed to snowdrift may be more sensitive to horizontal variability in EC concentration. Furthermore, these results highlight the importance of carefully choosing sampling sites and timing, as each parameter will have some effect on EC variability. They also emphasize the importance of gathering multiple samples from a site to obtain a representative value for the area. (letter)

  3. Measurement and control systems for an imaging electromagnetic flow metre.

    Science.gov (United States)

    Zhao, Y Y; Lucas, G; Leeungculsatien, T

    2014-03-01

    Electromagnetic flow metres based on the principle of Faraday's law of induction have been used successfully in many industries. The conventional electromagnetic flow metre can measure the mean liquid velocity in axisymmetric single-phase flows. However, in order to achieve velocity profile measurements in single-phase flows with non-uniform velocity profiles, a novel imaging electromagnetic flow metre (IEF) has been developed, which is described in this paper. The novel electromagnetic flow metre, which is based on 'weight value' theory to reconstruct velocity profiles, is interfaced with a 'Microrobotics VM1' microcontroller as a stand-alone unit. The work undertaken in the paper demonstrates that an imaging electromagnetic flow metre for liquid velocity profile measurement is an instrument that is highly suited for control via a microcontroller.
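
    A minimal numerical sketch of the 'weight value' idea, under stated assumptions: each electrode potential is modelled as a weighted sum of pixel velocities, U = Wv, and the profile is recovered by least squares. The weight matrix here is random placeholder data, not the instrument's actual weights, which a real IEF derives from the magnetic field and pipe geometry.

        # Hypothetical 'weight value' reconstruction: U = W v, solve for v.
        import numpy as np

        rng = np.random.default_rng(0)
        n_electrodes, n_pixels = 16, 8

        W = rng.uniform(0.1, 1.0, size=(n_electrodes, n_pixels))  # assumed weights
        v_true = np.linspace(1.0, 2.0, n_pixels)                  # assumed profile
        U = W @ v_true                                            # electrode potentials

        # least-squares inversion recovers the velocity profile
        v_est, *_ = np.linalg.lstsq(W, U, rcond=None)
        print(np.allclose(v_est, v_true))  # True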

  4. Performance and scaling of locally-structured grid methods for partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Colella, Phillip; Bell, John; Keen, Noel; Ligocki, Terry; Lijewski, Michael; Van Straalen, Brian

    2007-07-19

    In this paper, we discuss some of the issues in obtaining high performance for block-structured adaptive mesh refinement software for partial differential equations. We show examples in which AMR scales to thousands of processors. We also discuss a number of metrics for performance and scalability that can provide a basis for understanding the advantages and disadvantages of this approach.
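
    As a hedged illustration of such metrics (generic definitions, not the paper's specific measurements): strong-scaling efficiency compares a fixed problem across processor counts, while weak-scaling efficiency holds the work per processor constant.

        # Generic scaling metrics; the timings below are illustrative placeholders.
        def speedup(t_serial, t_parallel):
            return t_serial / t_parallel

        def strong_efficiency(t_serial, t_parallel, n_procs):
            # fixed total problem size across processor counts
            return speedup(t_serial, t_parallel) / n_procs

        def weak_efficiency(t_base, t_scaled):
            # problem size grows with processors, so the ideal time is constant
            return t_base / t_scaled

        print(strong_efficiency(t_serial=1000.0, t_parallel=1.2, n_procs=1024))
        print(weak_efficiency(t_base=10.0, t_scaled=12.5))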

  5. Experimental study on hard X-rays emitted from metre-scale negative discharges in air

    NARCIS (Netherlands)

    P.O. Kochkin (Pavlo); A. van Deursen (Arie); U. M. Ebert (Ute)

    2015-01-01

    We investigate the development of metre-long negative discharges and focus on their X-ray emissions. We describe the appearance, timing and spatial distribution of the X-rays. They appear in bursts of nanosecond duration, mostly in the cathode area. The spectrum can be characterized by an

  6. Melodic cues for metre

    NARCIS (Netherlands)

    Vos, P G; van Dijk, A; Schomaker, Lambertus

    1994-01-01

    A method of time-series analysis and a time-beating experiment were used to test the structural and perceptual validity of notated metre. Autocorrelation applied to the flow of melodic intervals between notes from thirty fragments of compositions for solo instruments by J S Bach strongly supported

  7. Controllable thousand-port low-latency optical packet switch architecture for short link applications

    NARCIS (Netherlands)

    Di Lucente, S.; Nazarathy, J.; Raz, O.; Calabretta, N.; Dorren, H.J.S.; Bienstman, P.; Morthier, G.; Roelkens, G.; et al.

    2011-01-01

    The implementation of a low-latency optical packet switch architecture that remains controllable while scaling to over a thousand ports is investigated in this paper. Optical packet switches with thousands of input/output ports are promising devices to improve the performance of short link applications in

  8. Scale dependence of rock friction at high work rate.

    Science.gov (United States)

    Yamashita, Futoshi; Fukuyama, Eiichi; Mizoguchi, Kazuo; Takizawa, Shigeru; Xu, Shiqing; Kawakata, Hironori

    2015-12-10

    Determination of the frictional properties of rocks is crucial for an understanding of earthquake mechanics, because most earthquakes are caused by frictional sliding along faults. Prior studies using rotary shear apparatus revealed a marked decrease in frictional strength, which can cause a large stress drop and strong shaking, with increasing slip rate and increasing work rate. (The mechanical work rate per unit area equals the product of the shear stress and the slip rate.) However, those important findings were obtained in experiments using rock specimens with dimensions of only several centimetres, which are much smaller than the dimensions of a natural fault (of the order of 1,000 metres). Here we use a large-scale biaxial friction apparatus with metre-sized rock specimens to investigate scale-dependent rock friction. The experiments show that rock friction in metre-sized rock specimens starts to decrease at a work rate that is one order of magnitude smaller than that in centimetre-sized rock specimens. Mechanical, visual and material observations suggest that slip-evolved stress heterogeneity on the fault accounts for the difference. On the basis of these observations, we propose that stress-concentrated areas exist in which frictional slip produces more wear materials (gouge) than in areas outside, resulting in further stress concentrations at these areas. Shear stress on the fault is primarily sustained by stress-concentrated areas that undergo a high work rate, so those areas should weaken rapidly and cause the macroscopic frictional strength to decrease abruptly. To verify this idea, we conducted numerical simulations assuming that local friction follows the frictional properties observed on centimetre-sized rock specimens. The simulations reproduced the macroscopic frictional properties observed on the metre-sized rock specimens. Given that localized stress concentrations commonly occur naturally, our results suggest that a natural fault may lose its
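
    For concreteness, the work-rate definition quoted in the abstract, with purely illustrative numbers (not values from the experiments):

        \[ \dot{W} = \tau \, v \]

    so a shear stress of \( \tau = 1\,\mathrm{MPa} \) acting at a slip rate of \( v = 0.01\,\mathrm{m\,s^{-1}} \) delivers \( \dot{W} = 10^{4}\,\mathrm{W\,m^{-2}} \) per unit fault area.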

  9. Metrópoles desgovernadas

    Directory of Open Access Journals (Sweden)

    Erminia Maricato

    2011-04-01

    Despite their economic, political, social, demographic, cultural, territorial and environmental importance, Brazil's metropolises suffer from a significant lack of government, evidenced by the incipient initiatives for inter-municipal and federative administrative cooperation. This article addresses the structural changes in the process of urbanization/metropolization brought about by the productive restructuring of global capitalism and, at the national scale, the change in the institutional (legal/political) framework, which shifted from concentrating and centralizing under the military regime to decentralized and emptied after the 1988 Constitution. The retreat of social policies during the 1980s and 1990s, notably in transport, housing and sanitation, together with the dismantling of the metropolitan agencies, has led our metropolises towards a trivialization of urban tragedies. Despite its urgency, the metropolitan question does not move any political force or institution to give it a prominent place on the national agenda.

  10. PetaScale calculations of the electronic structures of nanostructures with hundreds of thousands of processors

    International Nuclear Information System (INIS)

    Wang, Lin-Wang; Zhao, Zhengji; Meza, Juan

    2006-01-01

    Density functional theory (DFT) is the most widely used ab initio method in material simulations. It accounts for 75% of the NERSC allocation time in the material science category. DFT can be used to calculate the electronic structure, the charge density, the total energy and the atomic forces of a material system. With advances in HPC power and new algorithms, DFT can now be used to study thousand-atom systems in some limited ways (e.g. a single self-consistent calculation without atomic relaxation). But there are many problems which either require much larger systems (e.g. >100,000 atoms) or many total-energy calculation steps (e.g. for molecular dynamics or atomic relaxations). Examples include: grain boundary, dislocation energies and atomic structures, impurity transport and clustering in semiconductors, nanostructure growth, and the electronic structures of nanostructures and their internal electric fields. Due to the O(N³) scaling of the conventional DFT algorithms (as implemented in codes like Qbox, Paratec, Petots), these problems are beyond reach even for petascale computers. As the proposed petascale computers might have millions of processors, new computational paradigms and algorithms are needed to solve the above large-scale problems. In particular, O(N)-scaling algorithms with parallelization capability up to millions of processors are needed. For a large material science problem, a natural approach to achieve this goal is a divide-and-conquer method: spatially divide the system into many small pieces, and solve each piece with a small local group of processors. This solves the O(N) scaling and the parallelization problem at the same time. However, the challenge of this approach is how to divide the system into small pieces and how to patch them up without leaving traces of the spatial division. Here, we present a linear-scaling three-dimensional fragment (LS3DF) method which uses a novel division-patching scheme that cancels out the
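
    A toy cost model of why fragmentation makes the cubic cost linear, under stated assumptions (the constants and fragment size are illustrative, not taken from LS3DF):

        # Conventional DFT cost grows as N^3; solving many fixed-size
        # fragments instead grows linearly with N.
        def cost_cubic(n_atoms, c=1e-6):
            return c * n_atoms**3

        def cost_fragmented(n_atoms, fragment_size=100, c=1e-6):
            n_fragments = n_atoms / fragment_size
            return n_fragments * cost_cubic(fragment_size, c)

        for n in (1_000, 10_000, 100_000):
            print(n, cost_cubic(n), cost_fragmented(n))
        # cubic cost grows 1000x per 10x more atoms; fragmented cost only 10x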

  11. Durable Glass For Thousands Of Years

    International Nuclear Information System (INIS)

    Jantzen, C.

    2009-01-01

    The durability of natural glasses on geological time scales and ancient glasses for thousands of years is well documented. The necessity to predict the durability of high level nuclear waste (HLW) glasses on extended time scales has led to various thermodynamic and kinetic approaches. Advances in the measurement of medium range order (MRO) in glasses has led to the understanding that the molecular structure of a glass, and thus the glass composition, controls the glass durability by establishing the distribution of ion exchange sites, hydrolysis sites, and the access of water to those sites. During the early stages of glass dissolution, a 'gel' layer resembling a membrane forms through which ions exchange between the glass and the leachant. The hydrated gel layer exhibits acid/base properties which are manifested as the pH dependence of the thickness and nature of the gel layer. The gel layer ages into clay or zeolite minerals by Ostwald ripening. Zeolite mineral assemblages (higher-pH and Al³⁺-rich glasses) may cause the dissolution rate to increase which is undesirable for long-term performance of glass in the environment. Thermodynamic and structural approaches to the prediction of glass durability are compared versus Ostwald ripening.

  12. DURABLE GLASS FOR THOUSANDS OF YEARS

    Energy Technology Data Exchange (ETDEWEB)

    Jantzen, C.

    2009-12-04

    The durability of natural glasses on geological time scales and ancient glasses for thousands of years is well documented. The necessity to predict the durability of high level nuclear waste (HLW) glasses on extended time scales has led to various thermodynamic and kinetic approaches. Advances in the measurement of medium range order (MRO) in glasses has led to the understanding that the molecular structure of a glass, and thus the glass composition, controls the glass durability by establishing the distribution of ion exchange sites, hydrolysis sites, and the access of water to those sites. During the early stages of glass dissolution, a 'gel' layer resembling a membrane forms through which ions exchange between the glass and the leachant. The hydrated gel layer exhibits acid/base properties which are manifested as the pH dependence of the thickness and nature of the gel layer. The gel layer ages into clay or zeolite minerals by Ostwald ripening. Zeolite mineral assemblages (higher-pH and Al³⁺-rich glasses) may cause the dissolution rate to increase which is undesirable for long-term performance of glass in the environment. Thermodynamic and structural approaches to the prediction of glass durability are compared versus Ostwald ripening.

  13. Thousand Questions

    DEFF Research Database (Denmark)

    2012-01-01

    In this work the network asks "If I wrote you a love letter would you write back?" Like the love letters which appeared mysteriously on the noticeboards of Manchester University's Computer Department in the 1950s, thousands of texts circulate as computational processes perform the questions (perhaps as an expanded Turing test) on its listeners. These questions are extracted in real-time from Twitter with the keyword search of the '?' symbol to create a spatio-temporal experience. The computerized voice the audience hears is a collective one, an entanglement of humans and non-humans, that circulates across networks. If I wrote you a love letter would you write back? (and thousands of other questions) (封不回的情書?千言萬語無人回) was commissioned by the Microwave International New Media Festival 2012.

  14. Discovery Mondays: 'The civil engineering genius of the 100-metre deep underground caverns'

    CERN Multimedia

    2004-01-01

    CERN is first and foremost a place where physicists study particle collisions. But to be able to observe the infinitely small, they need huge pieces of equipment, the accelerators and detectors, whose construction, some 100 metres below the earth's surface, calls on the services of other fascinating disciplines. Take civil engineering, for example. For the construction of the LHC, some 420,000 cubic metres of rock have had to be excavated for the 6,500 metres of tunnel, 6 new shafts and 32 underground chambers and caverns. To avoid disrupting other experiments in progress, the work on these exceptional structures has had to be done without creating vibrations. The ATLAS experiment hall, a huge cathedral-like structure 100 metres below ground, is another mind-blowing feat of civil engineering. Its construction involved the use of ground-breaking technology, such as the system for suspending the ceiling put in place during the excavation work. At the next Discovery Monday, the specialists responsible for...

  15. Thousand Islands River : study of solutions to address critically low water levels : summary report; Riviere des Milles Iles : etude des solutions de soutien des etiages critiques : rapport sommaire

    Energy Technology Data Exchange (ETDEWEB)

    Cyr, J F; Fontin, M [Centre d' Expertise Hydrique du Quebec, Quebec, PQ (Canada). Service de la Securite des Barrages

    2005-07-01

    A study was conducted to find solutions to very low water flow in the Thousand Islands River, near the Island of Montreal, Quebec. It was launched in response to the critically low water levels that were experienced in 2001 and 2002. In the summer of 2001, municipalities served by the Thousand Islands River were faced with problems in the supply of drinking water when the river flow reached approximately 13 cubic metres per second. Since 1970, the minimal values of flow observed for this time of year had seldom fallen below 20 cubic metres per second. Under such conditions of flow, the dilution of any water discharged into the river became so weak that the aging water treatment facilities had to work beyond their capacity. In addition, the population served by this river has increased significantly in the past 2 decades. During this episode of critically low water levels, Quebec's Center of Water Expertise (CEHQ) intervened in an emergency measure to drain flows from a water reservoir in the catchment area to ensure a minimal flow of approximately 25 cubic metres per second. Thereafter, the affected municipalities had to ask the Quebec Environment Ministry to define permanent interventions to ensure a minimal flow in the river in the event of similar episodes. The CEHQ carried out a preliminary study of possible solutions to address the critically low water levels and presented its report in the spring of 2002. In the winter of 2002, CEHQ created an emergency management procedure in preparation for a repeat episode. 28 refs., 11 figs.

  16. MetR and CRP bind to the Vibrio harveyi lux promoters and regulate luminescence.

    Science.gov (United States)

    Chatterjee, Jaidip; Miyamoto, Carol M; Zouzoulas, Athina; Lang, B Franz; Skouris, Nicolas; Meighen, Edward A

    2002-10-01

    The induction of luminescence in Vibrio harveyi at the later stages of growth is controlled by a quorum-sensing mechanism in addition to nutritional signals. However, the mechanism of transmission of these signals directly to the lux promoters is unknown, and only one regulatory protein, LuxR, has been shown to bind directly to lux promoter DNA. In this report, we have cloned and sequenced two genes, crp and metR, coding for the nutritional regulators CRP (cAMP receptor protein) and MetR (a LysR homologue), involved in catabolite repression and methionine biosynthesis respectively. The metR gene was cloned based on a general strategy to detect lux DNA-binding proteins expressed from a genomic library, whereas the crp gene was cloned based on its complementation of an Escherichia coli crp mutant. Both CRP and MetR were shown to bind to lux promoter DNA, with CRP binding being dependent on the presence of cAMP. Expression studies indicated that the two regulators had opposite effects on luminescence: CRP was an activator and MetR a repressor. Disruption of crp decreased luminescence by about 1,000-fold, showing that CRP, like LuxR, is a major activator of luminescence, whereas disruption of metR resulted in an over 10-fold activation of luminescence, confirming its function as a repressor. Comparison of the levels of the quorum-sensing autoinducers excreted by V. harveyi and by the crp and metR mutants showed that autoinducer production was not significantly different, indicating that the nutritional signals do not affect luminescence by changing the levels of the signals required for quorum sensing. Indeed, the large effects of these nutritional sensors show that luminescence is controlled by multiple signals, related to the environment and to cell density, which must be integrated at the molecular level to control expression at the lux promoters.

  17. Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile

    Science.gov (United States)

    Halverson, Thomas; Poirier, Bill

    2015-03-01

    'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed using two different methods: (1) a phase-space-truncated momentum-symmetrized Gaussian basis and (2) a correlated truncated harmonic oscillator basis. In both cases, a simple classical phase-space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented into a single, easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm⁻¹.
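
    A schematic statement of an energy-cutoff truncation criterion of the sort such basis selections use (the paper's actual phase-space rule may differ):

        \[ \mathcal{B} = \Bigl\{ |n_1 \dots n_f\rangle \;:\; \sum_{i=1}^{f} \hbar\omega_i \bigl(n_i + \tfrac{1}{2}\bigr) \le E_{\mathrm{cut}} \Bigr\} \]

    i.e. a product basis function is kept only if its zeroth-order energy lies below the cutoff, which is what shrinks the basis so drastically.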

  18. A spike sorting toolbox for up to thousands of electrodes validated with ground truth recordings in vitro and in vivo

    Science.gov (United States)

    Lefebvre, Baptiste; Deny, Stéphane; Gardella, Christophe; Stimberg, Marcel; Jetter, Florian; Zeck, Guenther; Picaud, Serge; Duebel, Jens

    2018-01-01

    In recent years, multielectrode arrays and large silicon probes have been developed to record simultaneously from hundreds to thousands of densely packed electrodes. However, they require novel methods to extract the spiking activity of large ensembles of neurons. Here, we developed a new toolbox to sort spikes from these large-scale extracellular data. To validate our method, we performed simultaneous extracellular and loose-patch recordings in rodents to obtain 'ground truth' data, where the solution to the sorting problem is known for one cell. The performance of our algorithm was always close to the best expected performance, over a broad range of signal-to-noise ratios, in vitro and in vivo. The algorithm is entirely parallelized and has been successfully tested on recordings with up to 4225 electrodes. Our toolbox thus offers a generic solution to sort spikes accurately for up to thousands of electrodes. PMID:29557782

  19. The metre-kilogram-second system of electrical units

    CERN Document Server

    Sas, R K

    1947-01-01

    Introduction; electrostatic units, electromagnetic units, and practical units; magnetic intensity and flux density; rationalization; tribulations of the student; metres and kilograms in general and in mechanics; pulse and aperture; magnetostatics; steady currents; electrostatics; resistance; electromagnetic induction; determination of ε₀; capacity formulae; field; electrons and moving charges; quantum theory; memory assisted by the M.K.S. system; short account of M.K.S. units; list of formulae

  1. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnection is essential. In addition, for effective development of new codes, high-performance, scalable and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed-memory parallel computer systems. Needless to say, software highly tuned for new architectures such as many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K computer'. We have developed two versions, the standard version (eigen_s) and the enhanced-performance version (eigen_sx), on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide-and-conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS on a matrix of dimension two hundred thousand. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY representation. Even though this version is still at a test stage, it shows 4.7 TFLOPS on a matrix of the same dimension, improving on eigen_s. (author)
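
    A miniature of the same pipeline (Householder reduction, tridiagonal eigensolve, back-transformation), with SciPy's serial routines standing in for the blocked, distributed kernels described above; matrix size and names are illustrative:

        import numpy as np
        from scipy.linalg import hessenberg, eigh_tridiagonal

        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 200))
        A = (A + A.T) / 2                       # symmetric test matrix

        T, Q = hessenberg(A, calc_q=True)       # tridiagonal for symmetric input
        d, e = np.diag(T).copy(), np.diag(T, 1).copy()
        w, V = eigh_tridiagonal(d, e)           # LAPACK stands in for the DC step
        eigvecs = Q @ V                         # Householder back-transformation

        print(np.allclose(w, np.linalg.eigvalsh(A)))  # True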

  2. Discovery Mondays - 'The civil engineering genius of the 100-metre deep underground caverns'

    CERN Multimedia

    2004-01-01

    CERN is first and foremost a place where physicists study particle collisions. But to be able to observe the infinitely small, they need huge pieces of equipment, the accelerators and detectors, whose construction, some 100 metres below the earth's surface, calls on the services of other fascinating disciplines. Take civil engineering, for example. At the next Discovery Monday, come and find out about the machines involved in the large-scale excavation and concreting work. Everyone is welcome at Microcosm, which will be specially transformed into a worksite for the occasion! Come along to Microcosm (Reception Building 33, Meyrin site) on Monday 6 September from 7:30 p.m. to 9:00 p.m. Entrance free. http://www.cern.ch/microcosm

  3. Direct Mutagenesis of Thousands of Genomic Targets using Microarray-derived Oligonucleotides

    DEFF Research Database (Denmark)

    Bonde, Mads; Kosuri, Sriram; Genee, Hans Jasper

    2015-01-01

    Multiplex Automated Genome Engineering (MAGE) allows simultaneous mutagenesis of multiple target sites in bacterial genomes using short oligonucleotides. However, large-scale mutagenesis requires hundreds to thousands of unique oligos, which are costly to synthesize and impossible to scale up by ...

  4. Metrology at the nano scale

    International Nuclear Information System (INIS)

    Sheridan, B.; Cumpson, P.; Bailey, M.

    2006-01-01

    Progress in nanotechnology relies on ever more accurate measurements of quantities such as distance, force and current. Industry has long depended on accurate measurement. In the 19th century, for example, the performance of steam engines was seriously limited by inaccurately made components, a situation that was transformed by Henry Maudslay's screw micrometer calliper. And early in the 20th century, the development of telegraphy relied on improved standards of electrical resistance. Before this, each country had its own standards and cross-border communication was difficult. The same is true today of nanotechnology if it is to be fully exploited by industry. Principles of measurement that work well at the macroscopic level often become completely unworkable at the nanometre scale, about 100 nm and below. Imaging, for example, is not possible on this scale using optical microscopes, and it is virtually impossible to weigh a nanometre-scale object with any accuracy. In addition to needing more accurate measurements, nanotechnology also often requires a greater variety of measurements than conventional technology. For example, standard techniques used to make microchips generally need accurate length measurements, but the manufacture of electronics at the molecular scale requires magnetic, electrical, mechanical and chemical measurements as well. (U.K.)

  5. Retratos da metrópole parisiense

    OpenAIRE

    Saint-Julien, Thérèse; Goix, Renaud Le

    2009-01-01

    « La métropole parisienne, centralité, inégalité, proximité » offers a reading and an interpretation of the trends shaping the territory of the Île de France (the region comprising Paris and seven other départements), a major world metropolis of some 11.3 million inhabitants in 2004, connected to the networks of globalization and metropolization. The book sketches the main features of its emerging territorial structures and underlines the challenges at stake, their scope and their contradictions. Without pretension...

  6. Retratos da metrópole parisiense

    Directory of Open Access Journals (Sweden)

    Thérèse Saint-Julien

    2009-07-01

    « La métropole parisienne, centralité, inégalité, proximité » offers a reading and an interpretation of the trends shaping the territory of the Île de France (the region comprising Paris and seven other départements), a major world metropolis of some 11.3 million inhabitants in 2004, connected to the networks of globalization and metropolization. The book sketches the main features of its emerging territorial structures and underlines the challenges at stake, their scope and their contradictions. Without pretension...

  7. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena

    2016-08-03

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described, followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ), are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses, are discussed in detail.
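
    For reference, the standard forms of the two models being coupled (notation assumed here, not quoted from the paper): the incompressible Navier-Stokes equations in the free-flow region and Darcy's law in the porous matrix,

        \[ \rho\Bigl(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\Bigr) = -\nabla p + \mu \nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0, \qquad \mathbf{q} = -\frac{k}{\mu}\,\nabla p \]

    where k is the permeability and μ the dynamic viscosity; the conditions at the interface between the two regions are what the multi-scale scheme has to handle.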

  8. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena; Frisch, Jérôme; Salama, Amgad; Sun, Shuyu; Rank, Ernst; Mundani, Ralf Peter

    2016-01-01

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.

  9. The envelope of the power spectra of over a thousand δ Scuti stars. The Teff - νmax scaling relation

    Science.gov (United States)

    Barceló Forteza, S.; Roca Cortés, T.; García, R. A.

    2018-06-01

    CoRoT and Kepler high-precision photometric data have allowed the detection and characterization of oscillation parameters in stars other than the Sun. Moreover, thanks to the scaling relations, it is possible to estimate masses and radii for thousands of solar-type oscillating stars. Recently, a Δν - ρ relation has been found for δ Scuti stars. Now, analysing several hundred such stars observed with CoRoT and Kepler, we present an empirical relation between the frequency at maximum power of their oscillation spectra and their effective temperature. Such a relation can be explained with the help of the κ-mechanism, and the observed dispersion of the residuals is compatible with being caused by the gravity-darkening effect. Table A.1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/614/A46
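
    A sketch of how such an empirical relation can be fitted by regression; the sample below is synthetic placeholder data, not the paper's stars or coefficients:

        import numpy as np

        rng = np.random.default_rng(2)
        teff = rng.uniform(6500.0, 8500.0, 300)   # effective temperatures, K (synthetic)
        numax = 10.0 + 0.05 * (teff - 7000.0) + rng.normal(0.0, 5.0, teff.size)

        slope, intercept = np.polyfit(teff, numax, deg=1)
        print(f"nu_max ~ {slope:.3f} * T_eff + {intercept:.1f}  (synthetic fit)")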

  10. A antropologia urbana e os desafios da metrópole

    Directory of Open Access Journals (Sweden)

    José Guilherme Cantor Magnani

    2003-04-01

    The text analyses the state of urban anthropology as a discipline within the social sciences and its contribution to the study and understanding of the urban phenomenon, especially in the case of the great contemporary metropolises. The central argument is that, to accomplish this task, urban anthropology has at its disposal the ethnographic method, but the challenge is to apply this approach without falling into the 'village temptation', that is, seeking in the heterogeneous reality of big cities the conditions of the village (small groups, limited contexts) supposedly identified with the ethnographic approach. Several examples of recent research on the city of São Paulo, carried out at the Urban Anthropology Nucleus (NAU) and the Department of Anthropology of USP, are presented to show the potential of applying concepts, techniques and methods developed in anthropology, and in particular in urban anthropology, to the study of forms of sociability and cultural practices at the scale of the metropolis.

  11. Finding the Density of a Liquid Using a Metre Rule

    Science.gov (United States)

    Chattopadhyay, K. N.

    2008-01-01

    A simple method, which is based on the principle of moment of forces only, is described for the determination of the density of liquids without measuring the mass and volume. At first, an empty test tube and a solid substance, which are hung on each side of a metre rule, are balanced and the moment arm of the test tube is measured. Keeping the…
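
    One plausible completion of the moment balance the abstract begins to describe (the published procedure may differ in detail): with the same counterweight moment throughout, balancing the empty tube, the tube holding a volume V of water, and the tube holding the same volume of the test liquid at moment arms d_t, d_w and d_l gives

        \[ m_t d_t = (m_t + \rho_w V)\, d_w = (m_t + \rho_l V)\, d_l \]

    from which the tube mass and the volume cancel, leaving a density ratio expressed in moment arms alone:

        \[ \frac{\rho_l}{\rho_w} = \frac{1/d_l - 1/d_t}{1/d_w - 1/d_t} \]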

  12. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behaviour of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and to disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
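
    A minimal sketch of predicate-based event filtering of the kind the architecture employs; all names here are illustrative, not the paper's API:

        from dataclasses import dataclass, field
        from typing import Callable

        @dataclass
        class Event:
            source: str
            kind: str
            payload: dict

        @dataclass
        class Monitor:
            subscriptions: list = field(default_factory=list)

            def subscribe(self, predicate: Callable[[Event], bool], handler):
                self.subscriptions.append((predicate, handler))

            def publish(self, event: Event):
                # forward an event only to subscribers whose filter matches,
                # reducing the monitoring traffic that reaches each tool
                for predicate, handler in self.subscriptions:
                    if predicate(event):
                        handler(event)

        m = Monitor()
        m.subscribe(lambda e: e.kind == "error" and e.source.startswith("node42"),
                    lambda e: print("alert:", e.payload))
        m.publish(Event("node42/disk", "error", {"msg": "I/O timeout"}))  # forwarded
        m.publish(Event("node07/cpu", "load", {"value": 0.93}))           # filtered out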

  13. Non-metric multidimensional performance indicator scaling reveals seasonal and team dissimilarity within the National Rugby League.

    Science.gov (United States)

    Woods, Carl T; Robertson, Sam; Sinclair, Wade H; Collier, Neil French

    2018-04-01

    Analysing the dissimilarity of seasonal and team profiles within elite sport may reveal the evolutionary dynamics of game-play, while highlighting the similarity of individual team profiles. This study analysed seasonal and team dissimilarity within the National Rugby League (NRL) between the 2005 and 2016 seasons. Longitudinal design. Total seasonal values for 15 performance indicators were collected for every NRL team over the analysed period (n=190 observations). Non-metric multidimensional scaling was used to reveal seasonal and team dissimilarity. Compared to the 2005 to 2011 seasons, the 2012 to 2016 seasons were in a state of flux, with a relative dissimilarity in the positioning of team profiles on the ordination surface. There was an abrupt change in performance indicator characteristics following the 2012 season, with the 2014 season reflecting a large increase in the total count of 'all run metres' (d=1.21; 90% CI=0.56-1.83) and 'kick return metres' (d=2.99; 90% CI=2.12-3.84) and a decrease in 'missed tackles' (d=-2.43; 90% CI=-3.19 to -1.64) and 'tackle breaks' (d=-2.41; 90% CI=-3.17 to -1.62). Interpretation of team ordination plots showed that certain teams evolved in (dis)similar ways over the analysed period. It appears that NRL match-types evolved following the 2012 season and are currently in a state of flux. The modification of coaching tactics and rule changes may have contributed to these observations. Coaches could use these results when designing prospective game strategies in the NRL.
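
    A sketch of the non-metric MDS step with scikit-learn: each row is one team-season's totals for the performance indicators, pairwise distances between profiles are computed, and a two-dimensional ordination is fitted. The matrix here is a random placeholder for the 190 team-season observations:

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from sklearn.manifold import MDS

        rng = np.random.default_rng(3)
        profiles = rng.random((190, 15))   # 190 team-seasons x 15 indicators (placeholder)

        D = squareform(pdist(profiles, metric="euclidean"))
        nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                   random_state=0)
        coords = nmds.fit_transform(D)     # positions on the ordination surface
        print(coords.shape)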

  14. Performance of some biotic indices in the real variable world: A case study at different spatial scales in North-Western Mediterranean Sea

    International Nuclear Information System (INIS)

    Tataranni, Mariella; Lardicci, Claudio

    2010-01-01

    The aim of this study was to analyse the variability of four different benthic biotic indices (AMBI, BENTIX, H', M-AMBI) in two marine coastal areas of the North-Western Mediterranean Sea. In each coastal area, 36 replicates were randomly selected according to a hierarchical sampling design, which allowed estimating the variance components of the indices associated with four different spatial scales (ranging from metres to kilometres). All the analyses were performed at two different sampling periods in order to evaluate whether the observed trends were consistent over time. The variance components of the four indices revealed complex trends and different patterns in the two sampling periods. These results highlight that, independently of the index employed, a rigorous and appropriate sampling design taking into account different scales should always be used in order to avoid erroneous classifications and to develop effective monitoring programs. The study shows how the heterogeneous distribution of macrobenthos can affect the performance of such biotic indices.

  15. Validity of 20-metre multi stage shuttle run test for estimation of ...

    African Journals Online (AJOL)

    Validity of the 20-metre multi-stage shuttle run test for estimation of maximum oxygen uptake in Indian male university students. P Chatterjee, AK Banerjee, P Debnath, P Bas, B Chatterjee. South African Journal for Physical, Health Education, Recreation and Dance, Vol. 12(4) 2006: pp. 461-467.

  16. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    International Nuclear Information System (INIS)

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. In them, the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy are documented in more than 2,000 contributions covering some 30,000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective, the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high-temperature applications. (author)

  17. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Energy Technology Data Exchange (ETDEWEB)

    Kneringer, G; Roedhammer, P; Wildner, H [eds.]

    2001-07-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. In them, the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy are documented in more than 2,000 contributions covering some 30,000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective, the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high-temperature applications. (author)

  18. SWAP-Assembler: scalable and efficient genome assembly towards thousands of cores.

    Science.gov (United States)

    Meng, Jintao; Wang, Bingqiang; Wei, Yanjie; Feng, Shengzhong; Balaji, Pavan

    2014-01-01

    There is a widening gap between the throughput of massively parallel sequencing machines and the ability to analyze the resulting sequencing data. Traditional assembly methods, which require long execution times and large amounts of memory on a single workstation, limit their use on these massive data. This paper presents a highly scalable assembler named SWAP-Assembler for processing massive sequencing data using thousands of cores, where SWAP is an acronym for Small World Asynchronous Parallel model. In the paper, a mathematical description of the multi-step bi-directed graph (MSG) is provided to resolve the computational interdependence on merging edges, and a highly scalable computational framework for SWAP is developed to automatically perform the parallel computation of all operations. Graph cleaning and contig extension are also included for generating contigs of high quality. Experimental results show that SWAP-Assembler scales up to 2048 cores on the Yanhuang dataset, taking only 26 minutes, which is better than several other parallel assemblers, such as ABySS, Ray, and PASHA. Results also show that SWAP-Assembler can generate high-quality contigs with good N50 size and low error rate; in particular, it generated the longest N50 contig sizes for the Fish and Yanhuang datasets. In this paper, we presented a highly scalable and efficient genome assembly software, SWAP-Assembler. Compared with several other assemblers, it showed very good performance in terms of scalability and contig quality. This software is available at: https://sourceforge.net/projects/swapassembler.
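
    A toy illustration of the contig-extension step that the assembler parallelizes, using a plain de Bruijn graph rather than the paper's multi-step bi-directed graph (MSG); reads and k are illustrative:

        from collections import defaultdict

        def contigs(reads, k=4):
            # k-mer graph: (k-1)-mer prefix -> set of (k-1)-mer suffixes
            out = defaultdict(set)
            for r in reads:
                for i in range(len(r) - k + 1):
                    out[r[i:i + k - 1]].add(r[i + 1:i + k])
            indeg = defaultdict(int)
            for nexts in out.values():
                for nxt in nexts:
                    indeg[nxt] += 1

            def branch(n):  # a node where unambiguous extension must stop
                return indeg[n] != 1 or len(out.get(n, ())) != 1

            result = []
            for node in [n for n in list(out) if branch(n)]:
                for nxt in out[node]:
                    contig = node + nxt[-1]
                    while not branch(nxt):  # extend through non-branching nodes
                        nxt = next(iter(out[nxt]))
                        contig += nxt[-1]
                    result.append(contig)
            return result

        print(contigs(["ACGTACGG", "GTACGGAT"]))  # e.g. ['ACGTACG', 'ACGGAT']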

  19. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  20. Cuerpos-Espacios sonoros: Disonancias en la metrópolis comunicacional

    Directory of Open Access Journals (Sweden)

    Massimo Canevacci

    2008-01-01

    The article focuses on the shift from the industrial city to the communicational metropolis and the consequent transformation of its inhabitants, who adapt and renew themselves under the new conditions. This process favours the emergence of a new subject (the multividuo) who changes his vision of the world and turns urban scenarios into fragmentary, musical places (the soundscape). This change is possible only thanks to the performative reappropriation of certain urban spaces by artists and audiences willing to profane settings intended for institutional purposes or functional to the development of the city. In these spaces the soundscape and the bodyscape interact, developing new meanings and new political senses. In Rome, various places have been used in recent years to stage such performances, which connect with urban change and the consequent modification of sensory experience; these spaces are used to stage performances that resignify urban contexts. The essay analyses three Roman events: Disonanze (2006), La notte bianca (2006) and the performance of the Finnish group Pan Sonic in a setting chosen as a graffiti zone.

  1. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
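
    A minimal sketch of the spatiotemporal-index idea: each stored block is registered with its time range and bounding box, so a query touches only the matching blocks instead of scanning the file system. Schema, paths and records are illustrative, not NCCS's actual database:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE blocks (
            path TEXT, t0 TEXT, t1 TEXT,
            lat_min REAL, lat_max REAL, lon_min REAL, lon_max REAL)""")
        db.executemany("INSERT INTO blocks VALUES (?,?,?,?,?,?,?)", [
            ("/hdfs/merra/1980_01.nc", "1980-01-01", "1980-01-31", -90, 90, -180, 180),
            ("/hdfs/merra/1980_02.nc", "1980-02-01", "1980-02-29", -90, 90, -180, 180),
        ])

        # map a space-time query window to the blocks that overlap it
        rows = db.execute("""SELECT path FROM blocks
            WHERE t0 <= ? AND t1 >= ?
              AND lat_min <= ? AND lat_max >= ?
              AND lon_min <= ? AND lon_max >= ?""",
            ("1980-01-15", "1980-01-15", 45.0, 45.0, 10.0, 10.0)).fetchall()
        print(rows)  # [('/hdfs/merra/1980_01.nc',)]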

  2. High-resolution chemical composition of geothermal scalings from Hungary: Preliminary results

    Science.gov (United States)

    Boch, Ronny; Dietzel, Martin; Deák, József; Leis, Albrecht; Mindszenty, Andrea; Demeny, Attila

    2015-04-01

    Geothermal fluids originating from several hundred to several thousand metres depth mostly hold a high potential for secondary mineral precipitation (scaling) due to high total dissolved solids contents at elevated temperature and pressure conditions. The precipitation of carbonates, sulfates, sulfides and silica, for example, has been shown to cause severe problems in geothermal heat and electric power production when clogging of drill-holes, downhole pumps, pipes and heat exchangers occurs (e.g. in deep geothermal doublet systems). Ongoing scaling reduces the efficiency of energy extraction and in the worst cases may even force the abandonment of installations. In an attempt to study scaling processes both temporally and spatially, we collected mineral precipitates from selected sites in Hungary (Bükfürdő, Széchenyi, Szentes, Igal, Hajdúszoboszló). The samples, of up to 8 cm thickness, were recovered from different positions in the geothermal systems and precipitated from waters of various temperatures (40-120 °C) and variable overall chemical composition. Most of these scalings show fine lamination patterns representing mineral deposition over periods from weeks up to 45 years at our study sites. Solid-fluid interactions over time captured in the samples are investigated using high-resolution analytical techniques such as laser-ablation mass spectrometry and electron microprobe analysis, micromill sampling for stable isotope analysis, and micro-XRD, combined with hydrogeochemical modelling. A detailed investigation of the processes determining the formation and growth of precipitates can help to elucidate short-term versus long-term geothermal performance with regard to anthropogenic and natural reservoir and production dynamics. Changes in fluid chemistry, temperature, pressure, pH, degassing rate (CO2) and flow rate are reflected in the mineralogical, chemical and isotopic composition of the precipitates. Consequently, this high-resolution approach is intended as a contribution to decipher the
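
    For reference, the standard saturation-index criterion used in hydrogeochemical modelling of scaling tendency (a textbook relation, not a formula quoted from the abstract): a mineral tends to precipitate where SI > 0,

        \[ \mathrm{SI} = \log_{10}\frac{\mathrm{IAP}}{K_{sp}}, \qquad \text{e.g. for calcite } \mathrm{IAP} = a_{\mathrm{Ca}^{2+}}\, a_{\mathrm{CO}_3^{2-}} \]

    which is how changes in temperature, pH and CO2 degassing translate into the observed carbonate scaling.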

  3. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), the experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  4. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
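
    A much-simplified sketch may help fix intuition for the splitting idea. Below, a linear ODE model dx/dt = Wx stands in for the paper's nonlinear model, and the rows of W (one block of target genes per worker) are fitted independently in parallel from synthetic time-series data; the real algorithm uses sparsity-aware splitting and asynchronous communication between subnetworks, and everything here (model, data, block count) is illustrative only.

      import numpy as np
      from multiprocessing import Pool

      def fit_rows(args):
          # Estimate only the rows of W for one "subnetwork" of target genes:
          # least squares on dXdt[:, rows] ~= X @ W[rows, :].T
          rows, X, dXdt = args
          W_rows, *_ = np.linalg.lstsq(X, dXdt[:, rows], rcond=None)
          return rows, W_rows.T

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          n, T, dt = 40, 200, 0.01
          # Sparse random "true" network and a forward-Euler trajectory.
          W_true = rng.normal(0, 0.3, (n, n)) * (rng.random((n, n)) < 0.1)
          X = np.empty((T, n))
          X[0] = rng.random(n)
          for t in range(T - 1):
              X[t + 1] = X[t] + dt * X[t] @ W_true.T
          dXdt = np.diff(X, axis=0) / dt
          X = X[:-1]

          chunks = np.array_split(np.arange(n), 4)  # 4 subnetworks of targets
          with Pool(4) as pool:
              W_hat = np.zeros((n, n))
              args = [(c, X, dXdt) for c in chunks]
              for rows, W_rows in pool.imap_unordered(fit_rows, args):
                  W_hat[rows] = W_rows
          print("max abs error:", np.abs(W_hat - W_true).max())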

  5. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  6. RUMD: A general purpose molecular dynamics package optimized to utilize GPU hardware down to a few thousand particles

    Directory of Open Access Journals (Sweden)

    Nicholas P. Bailey, Trond S. Ingebrigtsen, Jesper Schmidt Hansen, Arno A. Veldhorst, Lasse Bøhling, Claire A. Lemarchand, Andreas E. Olsen, Andreas K. Bacher, Lorenzo Costigliola, Ulf R. Pedersen, Heine Larsen, Jeppe C. Dyre, Thomas B. Schrøder

    2017-12-01

    Full Text Available RUMD is a general purpose, high-performance molecular dynamics (MD) simulation package running on graphical processing units (GPUs). RUMD addresses the challenge of utilizing the many-core nature of modern GPU hardware when simulating small to medium system sizes (roughly from a few thousand up to a hundred thousand particles). It has a performance that is comparable to other GPU-MD codes at large system sizes and substantially better at smaller sizes. RUMD is open-source and consists of a library written in C++ and the CUDA extension to C, an easy-to-use Python interface, and a set of tools for set-up and post-simulation data analysis. The paper describes RUMD's main features, optimizations and performance benchmarks.

  7. Vertically migrating swimmers generate aggregation-scale eddies in a stratified column.

    Science.gov (United States)

    Houghton, Isabel A; Koseff, Jeffrey R; Monismith, Stephen G; Dabiri, John O

    2018-04-01

    Biologically generated turbulence has been proposed as an important contributor to nutrient transport and ocean mixing [1-3]. However, to produce non-negligible transport and mixing, such turbulence must produce eddies at scales comparable to the length scales of stratification in the ocean. It has previously been argued that biologically generated turbulence is limited to the scale of the individual animals involved [4], which would make turbulence created by highly abundant centimetre-scale zooplankton such as krill irrelevant to ocean mixing. Their small size notwithstanding, zooplankton form dense aggregations tens of metres in vertical extent as they undergo diurnal vertical migration over hundreds of metres [3,5,6]. This behaviour potentially introduces additional length scales, such as the scale of the aggregation, that are of relevance to animal interactions with the surrounding water column. Here we show that the collective vertical migration of centimetre-scale swimmers, as represented by the brine shrimp Artemia salina, generates aggregation-scale eddies that mix a stable density stratification, resulting in an effective turbulent diffusivity up to three orders of magnitude larger than the molecular diffusivity of salt. These observed large-scale mixing eddies are the result of flow in the wakes of the individual organisms coalescing to form a large-scale downward jet during upward swimming, even in the presence of a strong density stratification relative to typical values observed in the ocean. The results illustrate the potential for marine zooplankton to considerably alter the physical and biogeochemical structure of the water column, with potentially widespread effects owing to their high abundance in climatically important regions of the ocean [7].
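
    To put the quoted enhancement in numbers (our arithmetic, not the paper's): taking the standard molecular diffusivity of salt in water, D of about 1.5 x 10^-9 m^2/s, an enhancement of up to three orders of magnitude corresponds to an effective diffusivity of roughly

      \kappa_{\mathrm{eff}} \sim 10^{3}\, D \approx 1.5 \times 10^{-6}\ \mathrm{m^2\,s^{-1}},

    approaching the range of turbulent diapycnal diffusivities measured in the ocean interior.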

  8. The influence of the scale effect and high temperatures on the strength and strains of high performance concrete

    Directory of Open Access Journals (Sweden)

    Korsun Vladimyr Ivanovych

    2014-03-01

    Full Text Available The most effective way to reduce structure mass, labor input and construction costs is to use modern high-performance concretes of the classes C50/60…C90/105, which possess high physical and mechanical characteristics. One of the constraints on their implementation in mass construction in Ukraine is that the design standards contain no experimental data on the physical and mechanical properties of concretes of classes above C50/60, nor precise provisions for calculating reinforced concrete structures made of high-performance concretes. The authors present the results of experimental research on the influence of the scale effect and of short-term and long-term heating up to +200 °C on the temperature and shrinkage strains, and on the strength and strain characteristics under compression and tension, of high-strength modified concrete of class C70/85. The application of high-performance concretes is promising for constructing buildings intended to operate at high process temperatures: chimneys, cooling towers, basins, nuclear power plants' protective shells, etc. Reducing cross-sections can also reduce temperature gradients and thermal stresses in the structures.

  9. Coated Porous Si for High Performance On-Chip Supercapacitors

    Science.gov (United States)

    Grigoras, K.; Keskinen, J.; Grönberg, L.; Ahopelto, J.; Prunnila, M.

    2014-11-01

    High performance porous Si based supercapacitor electrodes are demonstrated. High power density and stability are provided by an ultra-thin TiN coating of the porous Si matrix. The TiN layer is deposited by atomic layer deposition (ALD), which provides sufficient conformality to reach the bottom of the high aspect ratio pores. Our porous Si supercapacitor devices exhibit almost ideal double layer capacitor characteristics with an electrode volumetric capacitance of 7.3 F/cm3. An increase of several orders of magnitude in power and energy density is obtained compared to uncoated porous silicon electrodes. Good device stability is confirmed by performing several thousand charge/discharge cycles.

  10. High-Performance Complementary Transistors and Medium-Scale Integrated Circuits Based on Carbon Nanotube Thin Films.

    Science.gov (United States)

    Yang, Yingjun; Ding, Li; Han, Jie; Zhang, Zhiyong; Peng, Lian-Mao

    2017-04-25

    Solution-derived carbon nanotube (CNT) network films with high semiconducting purity are suitable materials for the wafer-scale fabrication of field-effect transistors (FETs) and integrated circuits (ICs). However, it is challenging to realize high-performance complementary metal-oxide semiconductor (CMOS) FETs with high yield and stability on such CNT network films, and this difficulty hinders the development of CNT-film-based ICs. In this work, we developed a doping-free process for the fabrication of CMOS FETs based on solution-processed CNT network films, in which the polarity of the FETs was controlled using Sc or Pd as the source/drain contacts to selectively inject carriers into the channels. The fabricated top-gated CMOS FETs showed high symmetry between the characteristics of n- and p-type devices and exhibited high-performance uniformity and excellent scalability down to a gate length of 1 μm. Many common types of CMOS ICs, including typical logic gates, sequential circuits, and arithmetic units, were constructed based on CNT films, and the fabricated ICs exhibited rail-to-rail outputs because of the high noise margin of CMOS circuits. In particular, 4-bit full adders consisting of 132 CMOS FETs were realized with 100% yield, thereby demonstrating that this CMOS technology shows the potential to advance the development of medium-scale CNT-network-film-based ICs.

  11. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carns, Philip; Harms, Kevin; Jenkins, John; Mubarak, Misbah; Ross, Robert; Carothers, Christopher

    2016-05-02

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
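
    As a rough consistency check on the quoted figure (our arithmetic, not the paper's), near-linear scaling at this size implies a per-server rebuild contribution of

      410\ \mathrm{GiB/s} \div 1024 \approx 0.4\ \mathrm{GiB/s},

    i.e. a few hundred MiB/s per commodity node, which is plausible for node-local disk and network bandwidth and consistent with placement policy, rather than raw hardware throughput, becoming the limiting factor at scale.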

  12. Bogotá: metropolis of conflicts

    Directory of Open Access Journals (Sweden)

    Adriana Patricia López Velázquez

    2009-01-01

    Full Text Available This essay presents two faces of Bogotá. On the one hand, the metropolis that is growing and reporting positive figures for the trend over the period analysed, on indicators such as unsatisfied basic needs (NBI), the human development index (HDI) and foreign direct investment; this is the Bogotá that enjoys the demands of, and access to, the most dynamic markets and, in general, globalization. On the other hand, there is the Bogotá of those who have no employment, whose capacity to pay is quite limited, and who live in the constant unease of insecurity. This work aims to show, from a holistic perspective, that in the same space there exist two mutually complementary and functional communities which, within the general theoretical framework of the research project of Sanabria and López (2008), combine to organize the elements that make up quality of life.

  13. RUMD: A general purpose molecular dynamics package optimized to utilize GPU hardware down to a few thousand particles

    DEFF Research Database (Denmark)

    Bailey, Nicholas; Ingebrigtsen, Trond; Hansen, Jesper Schmidt

    2017-01-01

    RUMD is a general purpose, high-performance molecular dynamics (MD) simulation package running on graphical processing units (GPUs). RUMD addresses the challenge of utilizing the many-core nature of modern GPU hardware when simulating small to medium system sizes (roughly from a few thousand up...

  14. A high-performance dual-scale porous electrode for vanadium redox flow batteries

    Science.gov (United States)

    Zhou, X. L.; Zeng, Y. K.; Zhu, X. B.; Wei, L.; Zhao, T. S.

    2016-09-01

    In this work, we present a simple and cost-effective method to form a dual-scale porous electrode by KOH activation of the fibers of carbon papers. The large pores (∼10 μm), formed between carbon fibers, serve as the macroscopic pathways for high electrolyte flow rates, while the small pores (∼5 nm), formed on carbon fiber surfaces, act as active sites for rapid electrochemical reactions. It is shown that the Brunauer-Emmett-Teller specific surface area of the carbon paper is increased by a factor of 16 while maintaining the same hydraulic permeability as that of the original carbon paper electrode. We then apply the dual-scale electrode to a vanadium redox flow battery (VRFB) and demonstrate an energy efficiency ranging from 82% to 88% at current densities of 200-400 mA cm-2, the highest performance of a VRFB reported in the open literature to date.

  15. High-Throughput Computing on High-Performance Platforms: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oleynik, D [University of Texas at Arlington]; Panitkin, S [Brookhaven National Laboratory (BNL)]; Turilli, Matteo [Rutgers University]; Angius, Alessio [Rutgers University]; Oral, H Sarp [ORNL]; De, K [University of Texas at Arlington]; Klimentov, A [Brookhaven National Laboratory (BNL)]; Wells, Jack C. [ORNL]; Jha, S [Rutgers University]

    2017-10-01

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  16. Enhanced annotations and features for comparing thousands of Pseudomonas genomes in the Pseudomonas genome database.

    Science.gov (United States)

    Winsor, Geoffrey L; Griffiths, Emma J; Lo, Raymond; Dhillon, Bhavjinder K; Shay, Julie A; Brinkman, Fiona S L

    2016-01-04

    The Pseudomonas Genome Database (http://www.pseudomonas.com) is well known for the application of community-based annotation approaches for producing a high-quality Pseudomonas aeruginosa PAO1 genome annotation, and facilitating whole-genome comparative analyses with other Pseudomonas strains. To aid analysis of potentially thousands of complete and draft genome assemblies, this database and analysis platform was upgraded to integrate curated genome annotations and isolate metadata with enhanced tools for larger scale comparative analysis and visualization. Manually curated gene annotations are supplemented with improved computational analyses that help identify putative drug targets and vaccine candidates or assist with evolutionary studies by identifying orthologs, pathogen-associated genes and genomic islands. The database schema has been updated to integrate isolate metadata that will facilitate more powerful analysis of genomes across datasets in the future. We continue to place an emphasis on providing high-quality updates to gene annotations through regular review of the scientific literature and using community-based approaches including a major new Pseudomonas community initiative for the assignment of high-quality gene ontology terms to genes. As we further expand from thousands of genomes, we plan to provide enhancements that will aid data visualization and analysis arising from whole-genome comparative studies including more pan-genome and population-based approaches. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Validation of SCALE code package on high performance neutron shields

    International Nuclear Information System (INIS)

    Bace, M.; Jecmenica, R.; Smuc, T.

    1999-01-01

    The shielding ability and other properties of new high performance neutron shielding materials from the KRAFTON series have been recently published. The published experimental and MCNP results for two materials of the KRAFTON series have been compared with our own calculations. Two control modules of the SCALE-4.4 code system have been used, one based on one-dimensional radiation transport analysis (SAS1) and the other based on the three-dimensional Monte Carlo method (SAS3). The comparison of the calculated neutron dose equivalent rates shows a good agreement between experimental and calculated results for the KRAFTON-N2 material. Our results indicate that the N2-M-N2 sandwich type is approximately 10% inferior as a neutron shield to the KRAFTON-N2 material. All values of neutron dose equivalent obtained by SAS1 are approximately 25% lower in comparison with the SAS3 results, which indicates the magnitude of the discrepancies introduced by the one-dimensional geometry approximation. (author)

  18. Humans and Machines in the subway

    Directory of Open Access Journals (Sweden)

    Janice Caiafa

    2011-07-01

    Full Text Available In this work we analyze the repercussions that the recent implementation of an electronic fare collection system, with smart cards, has brought to the everyday life of subway riders in Rio de Janeiro. The adoption of a fare collection system involves the choice of a technology and the stipulation of a charging regime. By following this technological evolution in Rio de Janeiro's subway, we explore how its riders learn to approach the new interfaces with the automatic machines and to cope with the payment conditions that are imposed at the same time. We note, in this context, how the technological devices are coupled to human actions, both in the construction of the new fare collection system and in the sociability that develops in the subway's technological space. Keywords: Technology; Metrô (Rio de Janeiro); Sociability

  19. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  20. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  1. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
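
    The level bookkeeping behind LTS is easy to sketch. Below, each element receives its CFL time-step, elements are binned into levels with dt_p = dt_coarse / 2^p, and finer levels are subcycled; this toy (with made-up element sizes and wave speed) illustrates only the grouping, not the LTS-Newmark update itself.

      import numpy as np

      def assign_levels(h, wave_speed, cfl=0.5):
          # Per-element CFL time-step, then bin into power-of-two levels.
          dt_elem = cfl * h / wave_speed
          dt_coarse = dt_elem.max()
          levels = np.ceil(np.log2(dt_coarse / dt_elem)).astype(int)
          return levels, dt_coarse

      # Element sizes with strong local refinement (contrast ~100x).
      h = np.array([1.0, 0.9, 0.5, 0.12, 0.011])
      levels, dt0 = assign_levels(h, wave_speed=3.0)
      for p in range(levels.max() + 1):
          n_sub = 2 ** p  # subcycles of this level per coarse step
          print(f"level {p}: {np.sum(levels == p)} elements, "
                f"{n_sub} substeps of {dt0 / n_sub:.2e}s")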

  2. High Performance Hydrogen/Bromine Redox Flow Battery for Grid-Scale Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    Cho, KT; Ridgway, P; Weber, AZ; Haussener, S; Battaglia, V; Srinivasan, V

    2012-01-01

    The electrochemical behavior of a promising hydrogen/bromine redox flow battery is investigated for grid-scale energy-storage applications, with some of the best redox-flow-battery performance results to date, including a peak power of 1.4 W/cm(2) and a 91% voltaic efficiency at 0.4 W/cm(2) constant-power operation. The kinetics of bromine on various materials is discussed, with both rotating-disk-electrode and cell studies demonstrating that a carbon porous electrode for the bromine reaction can deliver platinum-comparable performance as long as sufficient surface area is realized. The effect of flow-cell designs and operating temperature is examined, and ohmic and mass-transfer losses are decreased by utilizing a flow-through electrode design and increasing cell temperature. Charge/discharge and discharge-rate tests also reveal that this system has highly reversible behavior and good rate capability. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.018211jes] All rights reserved.

  3. Dynamics of a peripheral metropolis in Brazil

    Directory of Open Access Journals (Sweden)

    Inaiá Maria Moreira de Carvalho

    2010-01-01

    Full Text Available This article analyzes the recent evolution of socio-spatial segregation and urban form in the city of Salvador, in light of the debate on the transformation of metropolises under globalized capital. While recognizing that all large cities are eventually reached by globalization, the text stresses that the effects of this process are not uniform and do not converge on a single model of city. One must consider the historical formation of each city, its institutions, actors and local political decisions, within a dynamic defined by continuity/transformation, in which what already existed conditions the emergence of the new, which in many cases had already begun to take shape in the past. By showing how an extremely unequal and segregated metropolis took shape, and the extent to which recent transformations have aggravated these conditions over the past years, this review of the case of Salvador offers some reflections for better understanding the effects of the globalization process on the large cities of Latin America.

  4. PERC 2 High-End Computer System Performance: Scalable Science and Engineering

    Energy Technology Data Exchange (ETDEWEB)

    Daniel Reed

    2006-10-15

    During the two years of SciDAC PERC-2, our activities centered largely on the development of new performance analysis techniques to enable efficient use of systems containing thousands or tens of thousands of processors. In addition, we continued our application engagement efforts and utilized our tools to study the performance of various SciDAC applications on a variety of HPC platforms.

  5. Performance prediction of industrial centrifuges using scale-down models.

    Science.gov (United States)

    Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M

    2004-12-01

    Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than was the pilot centrifuge.

  6. Improving the Performance of the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL]; Naughton III, Thomas J [ORNL]

    2014-01-01

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation-based toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation management overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement, such as reducing the simulation overhead for running the NAS Parallel Benchmark suite inside the simulator from 1,020% to 238% for the conjugate gradient (CG) benchmark and from 102% to 0% for the embarrassingly parallel (EP) benchmark, as well as from 37,511% to 13,808% for CG and from 3,332% to 204% for EP with accurate process failure simulation.
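
    The overhead percentages quoted above follow the usual definition (our reading; the abstract does not spell it out):

      \mathrm{overhead} = \frac{T_{\mathrm{xSim}} - T_{\mathrm{native}}}{T_{\mathrm{native}}} \times 100\,\%,

    so an overhead of 0% means a benchmark runs inside the simulator as fast as it does natively, while 238% means it takes 3.38 times as long.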

  7. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    Science.gov (United States)

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique for performing large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  8. A New Metre for Cheap, Quick, Reliable and Simple Thermal Transmittance (U-Value) Measurements in Buildings.

    Science.gov (United States)

    Andújar Márquez, José Manuel; Martínez Bohórquez, Miguel Ángel; Gómez Melgar, Sergio

    2017-09-03

    This paper deals with thermal transmittance measurement in buildings, specifically in building energy retrofitting. Today, if many thermal transmittance measurements are needed in a short time, current devices, based on measuring the heat flow through the wall, cannot carry them out, unless a great number of devices are used at once along with intensive and tedious post-processing and analysis work. In this paper, starting from well-known physical laws, the authors develop a methodology based on three temperature measurements, which is implemented in a novel thermal transmittance metre. The paper shows its development step by step. The resulting device is modular, scalable and fully wireless; it is capable of taking as many measurements at once as the user needs. The developed system is compared, working together on the same test, with the currently used heat-flow-based approach. The results show that the developed metre allows thermal transmittance measurements in buildings to be carried out in a cheap, quick, reliable and simple way.
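
    The abstract does not give the paper's exact formulation, but a common three-temperature approach estimates the transmittance from the indoor air, indoor wall-surface and outdoor air temperatures via the internal surface resistance. A minimal sketch, assuming the standard ISO 6946 value R_si = 0.13 m^2.K/W and hypothetical readings:

      # Three-temperature U-value estimate (not the paper's exact method;
      # the abstract does not give the formulation).
      R_SI = 0.13  # m^2.K/W, assumed internal surface resistance (ISO 6946)

      def u_value(t_in, t_surf_in, t_out):
          """Estimate U (W/m^2.K) from indoor air, indoor wall-surface and
          outdoor air temperatures (degrees C), sample by sample."""
          return (t_in - t_surf_in) / (R_SI * (t_in - t_out))

      # Averaging many samples reduces noise; the readings are made up.
      samples = [(20.1, 18.4, 5.2), (20.0, 18.3, 5.0), (19.9, 18.2, 4.9)]
      estimates = [u_value(*s) for s in samples]
      print(sum(estimates) / len(estimates))  # ~0.88 W/m^2.K here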

  9. Analysis of small-scale rotor hover performance data

    Science.gov (United States)

    Kitaplioglu, Cahit

    1990-01-01

    Rotor hover-performance data from a 1/6-scale helicopter rotor are analyzed and the data sets compared for the effects of ambient wind, test stand configuration, differing test facilities, and scaling. The data are also compared to full scale hover data. The data exhibited high scatter, not entirely due to ambient wind conditions. Effects of download on the test stand proved to be the most significant influence on the measured data. Small-scale data correlated reasonably well with full scale data; the correlation did not improve with Reynolds number corrections.

  10. Skin and scales of teleost fish: Simple structure but high performance and multiple functions

    Science.gov (United States)

    Vernerey, Franck J.; Barthelat, Francois

    2014-08-01

    Natural and man-made structural materials perform similar functions such as structural support or protection. Therefore they rely on the same types of properties: strength, robustness, lightweight. Nature can therefore provide a significant source of inspiration for new and alternative engineering designs. We report here some results regarding a very common, yet largely unknown, type of biological material: fish skin. Within a thin, flexible and lightweight layer, fish skins display a variety of strain stiffening and stabilizing mechanisms which promote multiple functions such as protection, robustness and swimming efficiency. We particularly discuss four important features pertaining to scaled skins: (a) a strongly elastic tensile behavior that is independent from the presence of rigid scales, (b) a compressive response that prevents buckling and wrinkling instabilities, which are usually predominant for thin membranes, (c) a bending response that displays nonlinear stiffening mechanisms arising from geometric constraints between neighboring scales and (d) a robust structure that preserves the above characteristics upon the loss or damage of structural elements. These important properties make fish skin an attractive model for the development of very thin and flexible armors and protective layers, especially when combined with the high penetration resistance of individual scales. Scaled structures inspired by fish skin could find applications in ultra-light and flexible armor systems, flexible electronics or the design of smart and adaptive morphing structures for aerospace vehicles.

  11. Bamboo structures: evoke the spirit workshop [organisation, facilitation, research] Brescia, Italy; 1-14 July 2009

    OpenAIRE

    Kolakowski, Marcin M.; Thompson, Alan

    2010-01-01

    Student workshop run by MM Kolakowski and Alan Thompson for architectural students. Construction of large scale bamboo structures: 18-metre high tower, 9-metre high wheel arches and other bamboo constructions.

  12. Identifying High Performance ERP Projects

    OpenAIRE

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  13. 2001, the ATLAS Cryostat Odyssey

    CERN Multimedia

    2001-01-01

    After a journey of several thousand kilometres, over sea and land, by canal and highway, the cryogenics barrel of the ATLAS electromagnetic calorimeter finally arrived at CERN last week. Installed in Hall 180, the cryogenics barrel of the ATLAS electromagnetic calorimeter will be fitted out to take the central superconducting solenoid and the electromagnetic calorimeter. On Monday 2 July, different French police units and EDF officials were once again keeping careful watch around the hairpin bends of the road twisting down from the Col de la Faucille: a special load weighing 100 tonnes, 7 metres high, 5.8 metres wide and 7.2 metres long was being brought down into the Pays de Gex to the Meyrin site of CERN. This time the destination was the ATLAS experiment. A huge blue tarpaulin cover concealed the cryogenics barrel of the experiment's liquid argon electromagnetic calorimeter. The cryostat consists of a vacuum chamber, a cylinder that is 5.5 metres in diameter, 7 metres long, and a concentric cold chamber ...

  14. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn-Sham calculations at high temperature

    Science.gov (United States)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.

    2018-03-01

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
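
    To make the quadrature idea concrete, here is a minimal serial sketch (not SQDFT's parallel finite-difference implementation) of approximating a bilinear form u^T f(H) u through a Chebyshev expansion of f evaluated with the three-term recurrence; the matrix, smearing and expansion order are illustrative.

      import numpy as np

      def chebyshev_coeffs(f, order):
          # Chebyshev interpolant coefficients of f on [-1, 1].
          k = np.arange(order + 1)
          x = np.cos(np.pi * k / order)  # Chebyshev-Lobatto nodes
          return np.polynomial.chebyshev.chebfit(x, f(x), order)

      def bilinear_form(H, u, f, order=40):
          # u^T f(H) u via T_{k+1}(H)u = 2 H T_k(H)u - T_{k-1}(H)u.
          c = chebyshev_coeffs(f, order)
          t_prev, t_curr = u, H @ u
          acc = c[0] * t_prev + c[1] * t_curr
          for k in range(2, order + 1):
              t_prev, t_curr = t_curr, 2.0 * (H @ t_curr) - t_prev
              acc += c[k] * t_curr
          return u @ acc

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 50))
      H = (A + A.T) / 2
      H /= np.linalg.norm(H, 2) * 1.01       # scale spectrum into [-1, 1]
      u = np.zeros(50); u[0] = 1.0           # e_0: a diagonal entry of f(H)
      fermi = lambda x: 1.0 / (1.0 + np.exp(x / 0.1))  # high-T smearing
      print(bilinear_form(H, u, fermi))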

  15. Toward a theory of high performance.

    Science.gov (United States)

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts, and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts, including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart, have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  16. Experimental Evaluation for the Microvibration Performance of a Segmented PC Method Based High Technology Industrial Facility Using 1/2 Scale Test Models

    Directory of Open Access Journals (Sweden)

    Sijun Kim

    2017-01-01

    Full Text Available The precast concrete (PC) method used in the construction process of high technology industrial facilities is limited when applied to those with greater span lengths, due to the transport length restriction (a maximum length of 15-16 m in Korea, set by traffic laws). In order to resolve this, this study introduces a structural system with a segmented PC system, and a 1/2 scale model with a width of 9000 mm (hereafter Segmented Model) is manufactured to evaluate vibration performance. Since a real vibrational environment cannot be reproduced for vibration testing using a scale model, a comparative analysis of their relative performances is conducted in this study. For this purpose, a 1/2 scale model with a width of 7200 mm (hereafter Nonsegmented Model) of a high technology industrial facility is additionally prepared using the conventional PC method. By applying the same experimental method to both scale models and comparing the results, the relative vibration performance of the Segmented Model is observed. Through impact testing, the natural frequencies of the two scale models are compared. Also, in order to analyze the estimated response induced by the equipment, the vibration responses due to the exciter are compared. The experimental results show that the Segmented Model exhibits similar or superior performance when compared to the Nonsegmented Model.

  17. Alternating current losses of a 10 metre long low loss superconducting cable conductor determined from phase sensitive measurements

    DEFF Research Database (Denmark)

    Olsen, Søren Krüger; Kühle, Anders Van Der Aa; Træholt, Chresten

    1999-01-01

    The ac loss of a superconducting cable conductor carrying an ac current is small. Therefore the ratio between the inductive (out-of-phase) and the resistive (in-phase) voltages over the conductor is correspondingly high. In vectorial representations this results in phase angles between the current and the voltage close to 90°, so that an accurate determination of the loss requires a correction scheme in which the phase sensitivity of lock-in amplifiers can be exploited. In this paper we present the results from ac-loss measurements on a low loss 10 metre long high temperature superconducting cable conductor using such a correction scheme. Measurements were carried out with and without a compensation circuit that could reduce the inductive voltage. The 1 μV/cm critical current of the conductor was 3240 A at 77 K. At an rms current of 2 kA (50 Hz) the ac loss was derived to be 0.6 ± 0.15 W/m. This is, to the best of our knowledge, the lowest value of ac loss of a high temperature superconducting cable conductor reported so far.
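
    The phase-sensitive extraction itself is easy to illustrate in software. A minimal sketch with synthetic signals (a real measurement digitizes the transport current and a voltage tap; the 0.2 V/m inductive amplitude below is made up but deliberately dominant), projecting the conductor voltage onto the current's phase to recover a 0.6 W/m loss:

      import numpy as np

      # Software lock-in extracting the small in-phase (loss) voltage.
      f, fs, n = 50.0, 50_000.0, 100_000   # frequency, sample rate, samples
      t = np.arange(n) / fs

      i_rms = 2000.0                        # 2 kA rms transport current
      v_loss_rms = 0.6 / i_rms              # in-phase voltage giving 0.6 W/m

      # Conductor voltage per metre: dominant inductive (90 deg) part
      # plus the tiny resistive (in-phase) part.
      v = np.sqrt(2) * (0.2 * np.cos(2 * np.pi * f * t)
                        + v_loss_rms * np.sin(2 * np.pi * f * t))

      # Project v onto the current's phase; the inductive part averages out.
      ref = np.sqrt(2) * np.sin(2 * np.pi * f * t)
      v_inphase_rms = np.mean(v * ref)
      print(v_inphase_rms * i_rms)          # ac loss in W per metre, ~0.6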

  18. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  19. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
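
    The tile low-rank compression at the core of this approach is easy to sketch. Below, off-diagonal tiles of a data-sparse kernel matrix are replaced by truncated-SVD factors to a given tolerance (a toy serial illustration, not HiCMA's API; the tile size, kernel and tolerance are made up):

      import numpy as np

      def compress_tile(tile, tol):
          # Truncated SVD of one off-diagonal tile, kept to tolerance tol.
          u, s, vt = np.linalg.svd(tile, full_matrices=False)
          rank = max(1, int(np.sum(s > tol * s[0])))
          return u[:, :rank] * s[:rank], vt[:rank, :]  # U (m x k), V^T (k x n)

      def tlr_compress(A, nb, tol):
          # Diagonal tiles stay dense; off-diagonal tiles become factors.
          tiles = {}
          for i in range(0, A.shape[0], nb):
              for j in range(0, A.shape[1], nb):
                  t = A[i:i+nb, j:j+nb]
                  tiles[i, j] = t if i == j else compress_tile(t, tol)
          return tiles

      # A smooth kernel matrix is data-sparse: off-diagonal tiles are low rank.
      x = np.linspace(0.0, 1.0, 512)
      A = np.exp(-np.abs(x[:, None] - x[None, :]))  # exponential kernel
      tiles = tlr_compress(A, nb=128, tol=1e-8)
      u, vt = tiles[0, 128]
      print(u.shape, vt.shape)                      # rank << 128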

  20. The ten thousand Kims

    Science.gov (United States)

    Baek, Seung Ki; Minnhagen, Petter; Kim, Beom Jun

    2011-07-01

    In Korean culture, the names of family members are recorded in special family books. This makes it possible to follow the distribution of Korean family names far back in history. It is shown here that these name distributions are well described by a simple null model, the random group formation (RGF) model. This model makes it possible to predict how the name distributions change and these predictions are shown to be borne out. In particular, the RGF model predicts that for married women entering a collection of family books in a certain year, the occurrence of the most common family name 'Kim' should be directly proportional to the total number of married women with the same proportionality constant for all the years. This prediction is also borne out to a high degree. We speculate that it reflects some inherent social stability in the Korean culture. In addition, we obtain an estimate of the total population of the Korean culture down to the year 500 AD, based on the RGF model, and find about ten thousand Kims.

  1. The ten thousand Kims

    International Nuclear Information System (INIS)

    Baek, Seung Ki; Minnhagen, Petter; Kim, Beom Jun

    2011-01-01

    In Korean culture, the names of family members are recorded in special family books. This makes it possible to follow the distribution of Korean family names far back in history. It is shown here that these name distributions are well described by a simple null model, the random group formation (RGF) model. This model makes it possible to predict how the name distributions change and these predictions are shown to be borne out. In particular, the RGF model predicts that for married women entering a collection of family books in a certain year, the occurrence of the most common family name 'Kim' should be directly proportional to the total number of married women with the same proportionality constant for all the years. This prediction is also borne out to a high degree. We speculate that it reflects some inherent social stability in the Korean culture. In addition, we obtain an estimate of the total population of the Korean culture down to the year 500 AD, based on the RGF model, and find about ten thousand Kims.
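
    The proportionality prediction is straightforward to test once yearly counts are digitized. A minimal sketch with made-up stand-in counts (the real test uses the family-book data): fit a single constant c in kims = c * totals and inspect how stable the ratio is across years.

      import numpy as np

      # Synthetic stand-ins for digitized family-book counts per year.
      totals = np.array([1200, 2500, 4100, 8000, 15000])  # married women
      kims = np.array([260, 545, 900, 1710, 3230])        # 'Kim' entries

      # One proportionality constant for all years, as the RGF model predicts.
      c, *_ = np.linalg.lstsq(totals[:, None], kims, rcond=None)
      print(f"proportionality constant: {c[0]:.3f}")
      print(kims / totals)  # roughly constant ratio year to year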

  2. Transit performance measures in California.

    Science.gov (United States)

    2016-04-01

    This research is the result of a California Department of Transportation (Caltrans) request to assess the most commonly available transit performance measures in California. Caltrans wanted to understand performance measures and data used by Metr...

  3. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amounts of data with fast response, which faces two major challenges: the "big data" challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
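
    The partition-merge idea is simple to sketch outside MapReduce. In the toy below (all data and the tile size are made up), each bounding box is mapped to the fixed grid tiles it overlaps, candidate pairs are joined within each tile, and a final merge deduplicates pairs discovered in more than one tile.

      from collections import defaultdict

      TILE = 10.0  # grid tile edge length, illustrative

      def tiles_for(box):
          # Yield every grid tile a box (x0, y0, x1, y1) overlaps.
          x0, y0, x1, y1 = box
          for tx in range(int(x0 // TILE), int(x1 // TILE) + 1):
              for ty in range(int(y0 // TILE), int(y1 // TILE) + 1):
                  yield (tx, ty)

      def overlaps(a, b):
          return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

      def spatial_join(set_a, set_b):
          grid = defaultdict(lambda: ([], []))  # partition phase
          for name, box in set_a:
              for t in tiles_for(box):
                  grid[t][0].append((name, box))
          for name, box in set_b:
              for t in tiles_for(box):
                  grid[t][1].append((name, box))
          pairs = set()                         # merge phase: deduplicate
          for a_list, b_list in grid.values():
              for na, ba in a_list:
                  for nb, bb in b_list:
                      if overlaps(ba, bb):
                          pairs.add((na, nb))
          return pairs

      nuclei = [("n1", (1, 1, 3, 3)), ("n2", (12, 12, 14, 15))]
      vessels = [("v1", (2, 2, 20, 4)), ("v2", (11, 13, 13, 18))]
      print(spatial_join(nuclei, vessels))      # {('n1','v1'), ('n2','v2')}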

  4. High performance Spark best practices for scaling and optimizing Apache Spark

    CERN Document Server

    Karau, Holden

    2017-01-01

    Apache Spark is amazing when everything clicks. But if you haven’t seen the performance improvements you expected, or still don’t feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources. Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you’ll also learn how to make it sing. With this book, you’ll explore: How Spark SQL’s new interfaces improve performance over SQL’s RDD data structure The choice between data joins in Core Spark and Spark SQL Techniques for getting the most out of standard RDD transformations How to work around performance issues i...

  5. Multi-Scale Texturing of Metallic Surfaces for High Performance Military Systems

    Science.gov (United States)

    2015-08-17

    AISI 440C stainless steel balls of 3 mm radius and 690 HV hardness. The sliding time (20 min), amplitude (10.5 mm), frequency (1.5 Hz) and normal... texture form (e.g., micro-scale topography) on surface integrity measures and tribological wear performance were quantified. The ensuing results are...

  6. Extreme-Scale De Novo Genome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Georganas, Evangelos [Intel Corporation, Santa Clara, CA (United States)]; Hofmeyr, Steven [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.]; Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division]; Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.]; Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.]; Rokhsar, Daniel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division]; Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.]

    2017-09-26

    De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different parts of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
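
    One early pipeline stage, k-mer counting with error filtering, is easy to illustrate in miniature (the reads, k and the threshold below are made up; the real pipeline distributes the hash table across thousands of nodes):

      from collections import Counter

      K, MIN_COUNT = 5, 2

      def kmers(read, k=K):
          # All overlapping k-mers of one read.
          return (read[i:i+k] for i in range(len(read) - k + 1))

      reads = ["ACGTACGTAC", "CGTACGTACG", "ACGTTCGTAC"]  # last read has an error
      counts = Counter(km for read in reads for km in kmers(read))
      # Erroneous k-mers appear rarely; keep only "solid" ones.
      solid = {km for km, c in counts.items() if c >= MIN_COUNT}
      print(sorted(solid))  # k-mers seen only once are dropped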

  7. Analysis of full scale impact into an abutment

    International Nuclear Information System (INIS)

    Fullard, K.; Dowler, H.J.; Soanes, T.P.T.

    1985-01-01

    A 60 mph impact into a tunnel abutment, of a flask on a railway flatrol with following vehicles, is shown to be a much less severe event for the flask than a 9 metre drop test to IAEA regulations. The analysis uses mathematical models of the full-scale event of the same type as were employed in studying the behaviour of quarter-scale models; the latter were subjected to actual impact testing as part of the validation process. (author)

  8. Analysis of results of checks IMRT in almost a thousand patients

    International Nuclear Information System (INIS)

    Richart, J.; Doval, S.; Perez-Calatayud, J.; Depieaggio, M.; Rodriguez, S.; Santos, M.

    2013-01-01

    IMRT treatments have been delivered in sliding-window mode at our hospital since November 2006. The major sites of application of this technique are head and neck, prostate, and gynecological. Specific checks are performed for each plan, covering both the delivered output and the analysis of ionometric measurements on a phantom to which the IMRT plan is exported. With over one thousand patients treated, the objective of this work is the presentation and analysis of these results. (Author)

  9. Psychological variables and Wechsler Adult Intelligence Scale-IV performance.

    Science.gov (United States)

    Gass, Carlton S; Gutierrez, Laura

    2017-01-01

    The MMPI-2 and WAIS-IV are commonly used together in neuropsychological evaluations, yet little is known about their interrelationships. This study explored the potential influence of psychological factors on WAIS-IV performance in a sample of 180 predominantly male veteran referrals who underwent a comprehensive neuropsychological examination in a VA Medical Center. Exclusionary criteria included failed performance validity testing and self-report distortion on the MMPI-2. A Principal Components Analysis was performed on the 15 MMPI-2 content scales, yielding three broader higher-order psychological dimensions: Internalized Emotional Dysfunction (IED), Externalized Emotional Dysfunction (EED), and Fear. Level of IED was not related to performance on the WAIS-IV Full Scale IQ or its four indexes (Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed). EED was not related to WAIS-IV performance. Level of Fear, which encompasses health preoccupations (HEA) and distorted perceptions (BIZ), was significantly related to WAIS-IV Full Scale IQ and Verbal Comprehension. These results challenge the common use of high scores on the MMPI-2 IED measures (chiefly depression and anxiety) to explain deficient WAIS-IV performance. In addition, they provide impetus for further investigation of the relation between verbal intelligence and Fear.
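
    The dimension-reduction step reads in code as follows: a minimal scikit-learn sketch with random placeholder scores (180 examinees by 15 scales) rather than actual MMPI-2 data:

        import numpy as np
        from sklearn.decomposition import PCA

        # Sketch of the dimension-reduction step described above: a principal
        # components analysis over 15 scale scores, keeping three higher-order
        # components. The data here are random placeholders, not MMPI-2 scores.

        rng = np.random.default_rng(0)
        scores = rng.normal(size=(180, 15))   # 180 examinees x 15 content scales

        pca = PCA(n_components=3)
        components = pca.fit_transform(scores)
        print(components.shape)                 # (180, 3)
        print(pca.explained_variance_ratio_)    # variance captured per component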

  10. Aerodynamic problems of cable-stayed bridges spanning over one thousand meters

    Institute of Scientific and Technical Information of China (English)

    Chen Airong; Ma Rujin; Wang Dalei

    2009-01-01

    The elongation of cable-stayed bridge spans brings a series of aerodynamic problems. First of all, the geometric nonlinear effect of extremely long cables is much more significant for cable-stayed bridges spanning over one thousand meters. Lateral static wind load will generate additional displacement of long cables, which causes a decrease in the supporting rigidity of the whole bridge and a change in its dynamic properties. Wind load, being the controlling load in the design of cable-stayed bridges, is a critical problem and needs to be solved. Meanwhile, research on a suitable system between pylon and deck indicates that a fixed-fixed connection system is an effective way to improve the performance of cable-stayed bridges under longitudinal wind load. In order to obtain aerodynamic parameters of cable-stayed bridges spanning over one thousand meters, an identification method for flutter derivatives of a full-bridge aero-elastic model is developed in this paper. Furthermore, vortex-induced vibration and Reynolds number effects are discussed in detail.

  11. ISC High Performance 2017 International Workshops, DRBSD, ExaComm, HCPM, HPC-IODC, IWOPH, IXPUG, P^3MA, VHPC, Visualization at Scale, WOPSSS

    CERN Document Server

    Yokota, Rio; Taufer, Michela; Shalf, John

    2017-01-01

    This book constitutes revised selected papers from 10 workshops that were held at the ISC High Performance 2017 conference in Frankfurt, Germany, in June 2017. The 59 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They stem from the following workshops: Workshop on Virtualization in High-Performance Cloud Computing (VHPC); Visualization at Scale: Deployment Case Studies and Experience Reports; International Workshop on Performance Portable Programming Models for Accelerators (P^3MA); OpenPOWER for HPC (IWOPH); International Workshop on Data Reduction for Big Scientific Data (DRBSD); International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale; Workshop on HPC Computing in a Post Moore's Law World (HCPM); HPC I/O in the Data Center (HPC-IODC); Workshop on Performance and Scalability of Storage Systems (WOPSSS); IXPUG: Experiences on Intel Knights Landing at the One Year Mark; International Workshop on Communicati...

  12. %HPGLIMMIX: A High-Performance SAS Macro for GLMM Estimation

    Directory of Open Access Journals (Sweden)

    Liang Xie

    2014-06-01

    Generalized linear mixed models (GLMMs) comprise a class of widely used statistical tools for data analysis with fixed and random effects when the response variable has a conditional distribution in the exponential family. GLMM analysis also has a close relationship with actuarial credibility theory. While readily available programs such as the GLIMMIX procedure in SAS and the lme4 package in R are powerful tools for using this class of models, these programs are not able to handle models with thousands of levels of fixed and random effects. By using sparse-matrix and other high-performance techniques, procedures such as HPMIXED in SAS can easily fit models with thousands of factor levels, but only for normally distributed response variables. In this paper, we present the %HPGLIMMIX SAS macro that fits GLMMs with a large number of sparsely populated design matrices using the doubly-iterative linearization (pseudo-likelihood) method, in which the sparse-matrix-based HPMIXED is used for the inner iterations, with the pseudo-variable constructed from the inverse-link function and the chosen model. Although the macro does not have the full functionality of the GLIMMIX procedure, time and memory savings can be large with the new macro. In applications in which design matrices contain many zeros and there are hundreds or thousands of factor levels, models can be fitted without exhausting computer memory, and a 90% or better reduction in running time can be observed. Examples with Poisson, binomial, and gamma conditional distributions are presented to demonstrate the usage and efficiency of this macro.
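
    As a rough illustration of the doubly-iterative linearization the macro relies on (not the macro itself), the sketch below fits a Poisson random-intercept GLMM in Python: the outer loop rebuilds the pseudo-variable from the inverse link, and the inner step solves Henderson's mixed-model equations as a stand-in for the sparse HPMIXED solve, with the variance ratio held fixed for brevity:

        import numpy as np

        # Pseudo-likelihood (linearization) for a Poisson GLMM with a log link
        # and one random-intercept factor. Outer loop: rebuild the pseudo-
        # variable z from the inverse link. Inner step: weighted mixed-model
        # equations. The variance ratio lam is fixed here; the real method
        # re-estimates variance components on each pass.

        rng = np.random.default_rng(1)
        n, p, q = 200, 2, 10
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        groups = rng.integers(0, q, size=n)
        Z = np.eye(q)[groups]                    # one-hot random-effect design
        y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1] + 0.4 * rng.normal(size=q)[groups]))

        beta, u, lam = np.zeros(p), np.zeros(q), 1.0
        for _ in range(20):                      # outer (linearization) iterations
            eta = X @ beta + Z @ u
            mu = np.exp(eta)
            z = eta + (y - mu) / mu              # pseudo-variable from the inverse link
            W = mu                               # GLM weights for the log link
            XtW, ZtW = X.T * W, Z.T * W
            lhs = np.block([[XtW @ X, XtW @ Z],
                            [ZtW @ X, ZtW @ Z + lam * np.eye(q)]])
            rhs = np.concatenate([XtW @ z, ZtW @ z])
            sol = np.linalg.solve(lhs, rhs)      # inner mixed-model solve
            beta, u = sol[:p], sol[p:]

        print("fixed effects:", beta)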

  13. Macroscopic High-Temperature Structural Analysis Model of Small-Scale PCHE Prototype (II)

    International Nuclear Information System (INIS)

    Song, Kee Nam; Lee, Heong Yeon; Hong, Sung Deok; Park, Hong Yoon

    2011-01-01

    The IHX (intermediate heat exchanger) of a VHTR (very high-temperature reactor) is a core component that transfers the high heat generated by the VHTR at 950 °C to a hydrogen production plant. Korea Atomic Energy Research Institute manufactured a small-scale prototype of a PCHE (printed circuit heat exchanger) that was being considered as a candidate for the IHX. In this study, as a part of high-temperature structural integrity evaluation of the small-scale PCHE prototype, we carried out high-temperature structural analysis modeling and macroscopic thermal and elastic structural analysis for the small-scale PCHE prototype under small-scale gas-loop test conditions. The modeling and analysis were performed as a precedent study prior to the performance test in the small-scale gas loop. The results obtained in this study will be compared with the test results for the small-scale PCHE. Moreover, these results will be used in the design of a medium-scale PCHE prototype

  14. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014, in Stuttgart, Germany – a forum to discuss the latest advancements in parallel tools.

  15. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo

    2014-04-01

    The flux performance of different hydrophobic microporous flat sheet commercial membranes made of poly(tetrafluoroethylene) (PTFE) and poly(propylene) (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench scale (high δT) and large scale module (low δT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module aiming to maximize the flux at different levels of operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (δT). A water vapor flux of 88.8 kg/m²h was obtained using a PTFE membrane at high δT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large scale module operating conditions. The average flux of the latter study (low δT) was found to be eight times lower than that of the bench scale (high δT) operating conditions.
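
    The gap between the bench-scale and module-scale fluxes is consistent with the vapour-pressure driving force: DCMD flux scales roughly as J ≈ B_m·(P_vap(T_feed) − P_vap(T_permeate)), where B_m is a membrane coefficient. The sketch below evaluates that pressure difference with the standard Antoine equation for water; the temperature pairs are illustrative, not the study's exact settings:

        # Back-of-the-envelope DCMD driving force: flux is roughly proportional
        # to the water vapour pressure difference across the membrane. Antoine
        # constants are the standard values for water (1-100 degC, mmHg form);
        # the temperature pairs below are illustrative only.

        def p_vap_kpa(t_c):
            """Saturated water vapour pressure (kPa) from the Antoine equation."""
            a, b, c = 8.07131, 1730.63, 233.426
            return 10 ** (a - b / (t_c + c)) * 0.133322   # mmHg -> kPa

        for t_feed, t_perm in ((80, 20), (45, 35)):       # high vs low delta-T cases
            dp = p_vap_kpa(t_feed) - p_vap_kpa(t_perm)
            print(f"dT = {t_feed - t_perm:2d} K -> vapour-pressure driving force {dp:5.1f} kPa")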

  17. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har...

  18. Implications of extreme life span in clonal organisms: millenary clones in meadows of the threatened seagrass Posidonia oceanica.

    Directory of Open Access Journals (Sweden)

    Sophie Arnaud-Haond

    The maximum size and age that clonal organisms can reach remains poorly known, although we do know that the largest natural clones can extend over hundreds or thousands of metres and potentially live for centuries. We made a review of findings to date, which reveal that the maximum clone age and size estimates reported in the literature are typically limited by the scale of sampling, and may grossly underestimate the maximum age and size of clonal organisms. A case study presented here shows the occurrence of clones of the slow-growing marine angiosperm Posidonia oceanica at spatial scales ranging from metres to hundreds of kilometres, using microsatellites on 1544 sampling units from a total of 40 locations across the Mediterranean Sea. This analysis revealed the presence, with a prevalence of 3.5 to 8.9%, of very large clones spreading over one to several (up to 15) kilometres at the different locations. Using estimates from field studies and models of the clonal growth of P. oceanica, we estimated these large clones to be hundreds to thousands of years old, suggesting the evolution of general-purpose genotypes with large phenotypic plasticity in this species. These results, obtained by combining genetics, demography and model-based calculations, question present knowledge and understanding of the spreading capacity and life span of plant clones. These findings call for further research on these life history traits associated with clonality, considering their possible ecological and evolutionary implications.

  19. High performance flexible CMOS SOI FinFETs

    KAUST Repository

    Fahad, Hossain M.

    2014-06-01

    We demonstrate the first ever CMOS compatible soft etch back based high performance flexible CMOS SOI FinFETs. The move from planar to non-planar FinFETs has enabled continued scaling down to the 14 nm technology node. This has been possible due to the reduction in off-state leakage and reduced short channel effects on account of the superior electrostatic charge control of multiple gates. At the same time, flexible electronics is an exciting expansion opportunity for next generation electronics. However, a fully integrated low-cost system will need to maintain ultra-large-scale-integration density, high performance and reliability - same as today's traditional electronics. Up until recently, this field has been mainly dominated by very weak performance organic electronics enabled by low temperature processes, conducive to low melting point plastics. Now however, we show the world's highest performing flexible version of 3D FinFET CMOS using a state-of-the-art CMOS compatible fabrication technique for high performance ultra-mobile consumer applications with stylish design. © 2014 IEEE.

  20. High-frequency, scaled graphene transistors on diamond-like carbon

    NARCIS (Netherlands)

    Wu, Y.; Lin, Y.M.; Bol, A.A.; Jenkins, K.A.; Xia, F.; Farmer, D.B.; Zu, Y.; Avouris, Ph.

    2011-01-01

    Owing to its high carrier mobility and saturation velocity, graphene has attracted enormous attention in recent years. In particular, high-performance graphene transistors for radio-frequency (r.f.) applications are of great interest. Synthesis of large-scale graphene sheets of high quality and at ...

  1. Teaching Thousands with Cloud-based GIS

    Science.gov (United States)

    Gould, Michael; DiBiase, David; Beale, Linda

    2016-04-01

    Educators often draw a distinction between "teaching about GIS" and "teaching with GIS." Teaching about GIS involves helping students learn what GIS is, what it does, and how it works. On the other hand, teaching with GIS involves using the technology as a means to achieve education objectives in the sciences, social sciences, professional disciplines like engineering and planning, and even the humanities. The same distinction applies to CyberGIS. Understandably, early efforts to develop CyberGIS curricula and educational resources tend to be concerned primarily with CyberGIS itself. However, if CyberGIS becomes as functional, usable and scalable as it aspires to be, teaching with CyberGIS has the potential to enable large and diverse global audiences to perform spatial analysis using hosted data, mapping and analysis services all running in the cloud. Early examples of teaching tens of thousands of students across the globe with cloud-based GIS include the massive open online courses (MOOCs) offered by Penn State University and others, as well as the series of MOOCs more recently developed and offered by Esri. In each case, ArcGIS Online was used to help students achieve educational objectives in subjects like business, geodesign, geospatial intelligence, and spatial analysis, as well as mapping. Feedback from the more than 100,000 total student participants to date, as well as from the educators and staff who supported these offerings, suggests that online education with cloud-based GIS is scalable to very large audiences. Lessons learned from the course design, development, and delivery of these early examples may be useful in informing the continuing development of CyberGIS education. While MOOCs may have passed the peak of their "hype cycle" in higher education, the phenomenon they revealed persists: namely, a global mass market of educated young adults who turn to free online education to expand their horizons. The

  2. Thousands of primer-free, high-quality, full-length SSU rRNA sequences from all domains of life

    DEFF Research Database (Denmark)

    Karst, Soeren M; Dueholm, Morten S; McIlroy, Simon J

    2016-01-01

    Ribosomal RNA (rRNA) genes are the consensus marker for determination of microbial diversity on the planet, invaluable in studies of evolution and, for the past decade, high-throughput sequencing of variable regions of ribosomal RNA genes has become the backbone of most microbial ecology studies...... (SSU) rRNA genes and synthetic long read sequencing by molecular tagging, to generate primer-free, full-length SSU rRNA gene sequences from all domains of life, with a median raw error rate of 0.17%. We generated thousands of full-length SSU rRNA sequences from five well-studied ecosystems (soil, human...... gut, fresh water, anaerobic digestion, and activated sludge) and obtained sequences covering all domains of life and the majority of all described phyla. Interestingly, 30% of all bacterial operational taxonomic units were novel, compared to the SILVA database (less than 97% similarity...

  3. Saúde nas metrópoles - Doenças infecciosas

    Directory of Open Access Journals (Sweden)

    Aluisio Cotrim Segurado

    2016-04-01

    Urbanization is an irreversible process on a global scale, and the number of people living in cities is expected to reach 67% of the planet's population by 2050. Low- and middle-income countries, in turn, currently have 30% to 40% of their urban population living in slums, at risk for various health problems. In Brazil, although 84.3% of the population already lived in urban areas in 2010, there are at present no consistent actions aimed at confronting urban health issues. This article discusses the epidemiological situation of infectious diseases of public health interest (dengue, HIV/AIDS infection, leptospirosis, leprosy and tuberculosis) from the year 2000 onwards in the country's 17 metropolises, in order to clarify the current role of infectious diseases in the context of Brazilian urban health.

  4. Large-scale performance studies of the Resistive Plate Chamber fast tracker for the ATLAS 1st-level muon trigger

    CERN Document Server

    Cattani, G; The ATLAS collaboration

    2009-01-01

    In the ATLAS experiment, Resistive Plate Chambers provide the first-level muon trigger and bunch crossing identification over a large area of the barrel region, as well as being used as a very fast 2D tracker. To achieve these goals a system of about 4000 gas gaps operating in avalanche mode was built (resulting in a total readout surface of about 16000 m² segmented into 350000 strips) and is now fully operational in the ATLAS pit, where its functionality has been widely tested using cosmic rays. Such a large-scale system allows the study of RPC performance (both from the point of view of gas gaps and readout electronics) with unprecedented sensitivity to rare effects, as well as providing the means to correlate (in a statistically significant way) characteristics at production sites with performance during operation. Calibrating such a system means fine-tuning thousands of parameters (involving both front-end electronics and gap voltage), as well as constantly monitoring performance and environm...

  5. Wafer-scale micro-optics fabrication

    Science.gov (United States)

    Voelkel, Reinhard

    2012-07-01

    Micro-optics is an indispensable key enabling technology for many products and applications today. Probably the most prestigious examples are the diffractive light shaping elements used in high-end DUV lithography steppers. Highly-efficient refractive and diffractive micro-optical elements are used for precise beam and pupil shaping. Micro-optics had a major impact on the reduction of aberrations and diffraction effects in projection lithography, allowing a resolution enhancement from 250 nm to 45 nm within the past decade. Micro-optics also plays a decisive role in medical devices (endoscopes, ophthalmology), in all laser-based devices and fiber communication networks, bringing high-speed internet to our homes. Even our modern smart phones contain a variety of micro-optical elements. For example, LED flash light shaping elements, the secondary camera, ambient light and proximity sensors. Wherever light is involved, micro-optics offers the chance to further miniaturize a device, to improve its performance, or to reduce manufacturing and packaging costs. Wafer-scale micro-optics fabrication is based on technology established by the semiconductor industry. Thousands of components are fabricated in parallel on a wafer. This review paper recapitulates major steps and inventions in wafer-scale micro-optics technology. The state-of-the-art of fabrication, testing and packaging technology is summarized.

  6. NWChem: Quantum Chemistry Simulations at Scale

    Energy Technology Data Exchange (ETDEWEB)

    Apra, Edoardo; Kowalski, Karol; Hammond, Jeff R.; Klemm, Michael

    2015-01-17

    Methods based on quantum mechanics equations have been developed since the 1930s with the purpose of accurately studying the electronic structure of molecules. However, it is only during the last two decades that intense development of new computational algorithms has opened the possibility of performing accurate simulations of challenging molecular processes with high-order many-body methods. A wealth of evidence indicates that the proper inclusion of instantaneous interactions between electrons (the so-called electron correlation effects) is indispensable for the accurate characterization of chemical reactivity, molecular properties, and interactions of light with matter. The availability of reliable methods for benchmarking of medium-size molecular systems also provides a unique chance to propagate high-level accuracy across spatial scales through multiscale methodologies. Some of these methods have the potential to utilize computational resources in an efficient way since they are characterized by high numerical complexity and an appropriate level of data granularity, which can be efficiently distributed over multi-processor architectures. The broad spectrum of coupled cluster (CC) methods falls into this class of methodologies. Several recent CC implementations have clearly demonstrated the scalability of CC formalisms on architectures composed of hundreds of thousands of computational cores. In this context NWChem provides a collection of Tensor Contraction Engine (TCE) generated parallel implementations of various coupled cluster methods capable of taking advantage of many thousands of cores on leadership-class parallel architectures.
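
    The TCE's output is, at bottom, sequences of tensor contractions. The toy numpy sketch below shows two contractions of the shape that dominates coupled cluster codes; numpy.einsum is a serial stand-in for NWChem's distributed, tiled kernels, and all dimensions are placeholder values:

        import numpy as np

        # Toy contractions of the kind the Tensor Contraction Engine generates
        # for coupled cluster methods; o/v are tiny placeholder dimensions
        # (o = occupied, v = virtual orbitals).

        o, v = 4, 8
        V = np.random.rand(v, v, o, o)    # two-electron integrals <ab||ij>
        T2 = np.random.rand(o, o, v, v)   # cluster amplitudes t_{ij}^{ab}

        # An intermediate built from two 4-index tensors (an O(o^4 v^2) contraction):
        W = np.einsum("cdkl,ijcd->klij", V, T2)

        # An MP2/CCD-style correlation-energy expression: a full contraction to a scalar.
        E_corr = np.einsum("abij,ijab->", V, T2)
        print(W.shape, E_corr)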

  7. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Energy Technology Data Exchange (ETDEWEB)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K. [Cray Inc., St. Paul, MN 55101 (United States); Porter, D. [Minnesota Supercomputing Institute for Advanced Computational Research, Minneapolis, MN USA (United States); O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Edmon, P., E-mail: pjm@cray.com, E-mail: nradclif@cray.com, E-mail: kkandalla@cray.com, E-mail: oneill@astro.umn.edu, E-mail: nolt0040@umn.edu, E-mail: donnert@ira.inaf.it, E-mail: twj@umn.edu, E-mail: dhp@umn.edu, E-mail: pedmon@cfa.harvard.edu [Institute for Theory and Computation, Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States)

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
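
    The one-sided (MPI-RMA) communication style the code builds on can be sketched with mpi4py; the halo-style put below is purely illustrative of the pattern, not WOMBAT's own implementation (which is Fortran, tuned for many threads per rank):

        from mpi4py import MPI
        import numpy as np

        # Each rank exposes a buffer through an RMA window; a neighbour updates
        # it with a one-sided Put instead of matched send/recv pairs. Run under
        # mpirun with 2+ ranks; purely illustrative of the MPI-RMA pattern.

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        buf = np.zeros(4, dtype="d")
        win = MPI.Win.Create(buf, comm=comm)

        win.Fence()                              # open the RMA exposure epoch
        if comm.Get_size() > 1:
            target = (rank + 1) % comm.Get_size()
            win.Put(np.full(4, float(rank)), target_rank=target)
        win.Fence()                              # close the epoch
        print(rank, buf)
        win.Free()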

  9. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  10. submitter Unified Scaling Law for flux pinning in practical superconductors: II. Parameter testing, scaling constants, and the Extrapolative Scaling Expression

    CERN Document Server

    Ekin, Jack W; Goodrich, Loren; Splett, Jolene; Bordini, Bernardo; Richter, David

    2016-01-01

    A scaling study of several thousand Nb$_{3}$Sn critical-current $(I_c)$ measurements is used to derive the Extrapolative Scaling Expression (ESE), a relation that can quickly and accurately extrapolate limited datasets to obtain full three-dimensional dependences of $I_c$ on magnetic field (B), temperature (T), and mechanical strain (ε). The relation has the advantage of being easy to implement, and offers significant savings in sample characterization time and a useful tool for magnet design. Thorough data-based analysis of the general parameterization of the Unified Scaling Law (USL) shows the existence of three universal scaling constants for practical Nb$_{3}$Sn conductors. The study also identifies the scaling parameters that are conductor specific and need to be fitted to each conductor. This investigation includes two new, rare, and very large $I_c(B,T,ε)$ datasets (each with nearly a thousand $I_c$ measurements spanning magnetic fields from 1 to 16 T, temperatures from ~2.26 to 14 K, and intrinsic strain...

  11. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
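
    The likelihood kernel that BEAGLE accelerates is Felsenstein's pruning recursion over per-node partial-likelihood arrays. A single-site numpy version for the tree ((A,B),C) under the Jukes-Cantor model (a toy stand-in for the library's GPU and SIMD implementations, with illustrative branch lengths and states) looks like this:

        import numpy as np

        # One-site phylogenetic likelihood by Felsenstein's pruning algorithm
        # on the tree ((A,B),C) under Jukes-Cantor: the core computation that
        # BEAGLE parallelizes over sites and partial-likelihood buffers.

        def jc_matrix(t):
            """Jukes-Cantor transition probabilities for branch length t."""
            e = np.exp(-4.0 * t / 3.0)
            return np.full((4, 4), 0.25 * (1 - e)) + np.eye(4) * e

        def tip_partial(state):            # A,C,G,T -> one-hot partial likelihoods
            p = np.zeros(4)
            p["ACGT".index(state)] = 1.0
            return p

        def combine(child_partials):
            """Partial likelihoods at a node from (partial, branch_length) children."""
            out = np.ones(4)
            for partial, t in child_partials:
                out *= jc_matrix(t) @ partial
            return out

        ab = combine([(tip_partial("A"), 0.1), (tip_partial("A"), 0.2)])
        root = combine([(ab, 0.05), (tip_partial("G"), 0.3)])
        site_likelihood = 0.25 * root.sum()   # uniform base frequencies at the root
        print(site_likelihood)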

  12. Metrópole digital: o jovem aprendiz na educação tecnológica

    OpenAIRE

    Assunção, Zoraia da Silva

    2014-01-01

    Developed in the setting of the Instituto Metrópole Digital (IMD), a supplementary unit of the Universidade Federal do Rio Grande do Norte that trains personnel through technical- and higher-level courses (the technical-level training being tied to a digital inclusion process intended to attract young people to the IT field, with emphases on Software and Hardware Development), this thesis aims to investigate the cognitive change of the young apprentice in technological education and ...

  13. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information that is used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on an online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed within the scope of the ATLAS TDAQ project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session, the IS handles about a hundred gigabytes of information which is constantly updated, with update intervals varying from a second to a few tens of seconds. IS ...

  14. The design of an acoustic data link for a deep-sea probe

    International Nuclear Information System (INIS)

    Coates, R.; Mathams, R.F.; Owens, A.R.

    1986-01-01

    The report describes a digital computer simulation of the performance of possible acoustic digital data link designs for use with a deep ocean penetrometer. It concludes with a description of the acoustic and electronic parts of a prototype system. The digital computer model was developed on the assumption that the transmitter would need to pass a low-error-rate data signal vertically upwards through some tens of metres of sea-bed sediment as well as some thousands of metres of sea water. The model allowed for variability in sediment attenuation, sea state, transmitter power and modulation technique. It was concluded that, at acceptable transmitter powers, a usable signal should be recoverable under all expected environmental conditions. The prototype system was built and tested in laboratory conditions. The tests indicated that satisfactory performance should be achievable with field equipment derived from this prototype. (author)

  15. High-Performance Carbon Dioxide Electrocatalytic Reduction by Easily Fabricated Large-Scale Silver Nanowire Arrays.

    Science.gov (United States)

    Luan, Chuhao; Shao, Yang; Lu, Qi; Gao, Shenghan; Huang, Kai; Wu, Hui; Yao, Kefu

    2018-05-17

    An efficient and selective catalyst is in urgent need for carbon dioxide electroreduction, and silver is one of the promising candidates with affordable costs. Here we fabricated large-scale vertically standing Ag nanowire arrays with high crystallinity and electrical conductivity as carbon dioxide electroreduction catalysts by a simple nanomolding method that was usually considered not feasible for metallic crystalline materials. A great enhancement of current densities and selectivity for CO at moderate potentials was achieved. The current density for CO (j_CO) of the Ag nanowire array 200 nm in diameter was more than 2500 times larger than that of Ag foil at an overpotential of 0.49 V, with an efficiency over 90%. The enhanced performance is attributed to a greatly increased electrochemically active surface area (ECSA) and higher intrinsic activity compared to those of polycrystalline Ag foil. More low-coordinated sites on the nanowires, which can better stabilize the CO2 intermediate, are responsible for the high intrinsic activity. In addition, the impact of surface morphology, which induces limited mass transport, on the reaction selectivity and efficiency of nanowire arrays with different diameters was also discussed.

  16. A policy hackathon for analysing impacts and solutions up to 20 metres sea-level rise

    Science.gov (United States)

    Haasnoot, Marjolijn; Bouwer, Laurens; Kwadijk, Jaap

    2017-04-01

    We organised a policy hackathon in order to quantify the impacts of accelerated and high-end sea-level rise of up to 20 metres on the coast of the Netherlands, and to develop possible solutions. This was done during one day, with 20 experts from a wide variety of disciplines, including hydrology, geology, coastal engineering, economics, and public policy. During the process the problem was divided into several sub-sets of issues that were analysed and solved within small teams of 4 to 8 people. Both a top-down impact analysis and a bottom-up vulnerability analysis were done by answering the questions: What is the impact of a sea level rise of x metres? And how much sea level rise can be accommodated before transformative actions are needed? Next, adaptation tipping points were identified that describe conditions under which the coastal system starts to perform unacceptably. Reasons for an adaptation tipping point can be technical (technically not possible), economic (cost-benefits are negative), or resource-related (available space, sand, energy production, finance). The results are presented in a summary document, and through an infographic displaying different adaptation tipping points and milestones that occur when the sea level rises up to 20 m. No technical limitations were found for adaptation, but many important decisions need to be taken. Although accelerated sea level rise seems far away, it can have important consequences for short-term decisions that are required for transformative actions. Such extensive actions require more time for implementation. Also, other actions may become ineffective before the end of their design life. This hackathon exercise shows that it is possible to map within a short time frame the issues at hand, as well as potentially effective solutions. This can be replicated for other problems, and can be useful for decision-makers who require quick but in-depth analysis of their long-term planning problems.

  17. De la autoformación como práctica instituyente en las metrópolis postfordistas

    OpenAIRE

    Miquel Bartual, María José

    2013-01-01

    Through a theoretical and practical itinerary, we set out to investigate how, in the context of post-Fordist metropolises, certain self-education processes can lead to social transformations in the political sphere. On this premise, we first analyse whether self-education in itself constitutes a questioning of the places assigned in the redistribution of knowledge, linked to processes of social cooperation and resistance, as well as to the generation of institutions of the common...

  18. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel and the symmetric/hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and push memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA and should appear in CUBLAS-6.0. KBLAS has been used in large-scale simulations of multi-object adaptive optics.
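
    For a sense of the workload KBLAS tunes, the CuPy snippet below (which dispatches to cuBLAS, not to KBLAS's own API) runs a double-precision GEMV and tallies the memory traffic that makes the kernel bandwidth-bound; the size is arbitrary and a CUDA-capable GPU is assumed:

        import cupy as cp

        # A level-2 BLAS workload of the kind KBLAS optimizes: dgemv on the
        # GPU. GEMV performs O(n^2) flops on O(n^2) data, so it is limited by
        # memory bandwidth, the resource latency-hiding kernels target.

        n = 8192
        A = cp.random.rand(n, n, dtype=cp.float64)
        x = cp.random.rand(n, dtype=cp.float64)

        y = A @ x                           # dgemv via cuBLAS
        cp.cuda.Stream.null.synchronize()   # wait for the kernel to finish

        bytes_moved = A.nbytes + x.nbytes + y.nbytes
        print(f"{bytes_moved / 1e9:.2f} GB touched per GEMV")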

  19. High-performance cement-based grouts for use in a nuclear waste disposal facility

    International Nuclear Information System (INIS)

    Onofrei, M.; Gray, M.N.

    1992-12-01

    National and international agencies have identified cement-based materials as prime candidates for sealing vaults that would isolate nuclear fuel wastes from the biosphere. Insufficient information is currently available to allow a reasonable analysis of the long-term performance of these sealing materials in a vault. A combined laboratory and modelling research program was undertaken to provide the necessary information for a specially developed high-performance cement grout. The results indicate that acceptable performance is likely for at least thousands of years and probably for much longer periods. The materials, which have been proven to be effective in field applications, are shown to be virtually impermeable and highly leach resistant under vault conditions. Special plasticizing additives used in the material formulation enhance the physical characteristics of the grout without detriment to its chemical durability. Neither modelling nor laboratory testing have yet provided a definitive assessment of the grout's longevity. However, none of the results of these studies has contraindicated the use of high-performance cement-based grouts in vault sealing applications. (Author) (24 figs., 6 tabs., 21 refs.)

  20. The fluid flow consequences of CO2 migration from 1000 to 600 metres upon passing the critical conditions of CO2

    NARCIS (Netherlands)

    Meer, L.G.H.; Hofstee, C.; Orlic, B.

    2009-01-01

    The minimum injection depth for the storage of CO2 is normally set at 800 metres. At and beyond this depth, subsurface conditions exist where CO2 is in a so-called critical state. The supercritical CO2 has a viscosity comparable to that of a normal gas and a liquid-like density. Due to the ...

  1. China's new path

    International Nuclear Information System (INIS)

    Liu Yingling

    2008-01-01

    The recent policy tools have also consolidated and advanced traditional renewable energy industries, including hydropower and solar thermal panels, where China has already been a world leader. The technologies are comparatively simple and low-cost, and the country has developed fairly strong construction, manufacturing and installation industries for both sources. They are still dominant in China's renewable energy use, and are expected to see continuous strong growth. Hydropower accounts for about two-thirds of China's current renewable energy use. It has grown by over 8 per cent annually from 2002 to 2006, and installed capacity will reach 190 GW by 2010 and 300 GW by 2020. China also has nearly two-thirds of the world's solar hot water capacity: more than one in every ten households bathe in water heated by the sun. Such solar thermal has witnessed 20-25 per cent annual growth in recent years, with installed capacity rising from 35 million square metres in 2000 to 100 million square metres by the end of 2006. The government aims for 150 million square metres by 2010 and double that figure by 2020. A more optimistic prediction envisages 800 million square metres installed capacity by 2030, which would mean that more than half of all Chinese households would be using solar energy for water heating. Renewable energy has become a strategic industry in China. The country has more than 50 domestic wind turbine manufacturers, over 15 major solar cell manufacturers and roughly 50 companies constructing, expanding or planning for polysilicon production lines, the key components for solar PV systems. Those two industries together employ some 80,000 people. The country also has thousands of hydropower manufacturers and engineering and design firms. More than a thousand solar water heater manufacturers throughout the country - and associated design, installation and service providers - provide some 600,000 jobs. As renewable industries are scaled up, costs will come down

  2. Alternating current losses of a 10 metre long low loss superconducting cable conductor determined from phase sensitive measurements

    International Nuclear Information System (INIS)

    Krueger Olsen, S.; Kuehle, A.; Traeholt, C.; C Rasmussen, C.; Toennesen, O.; Daeumling, M.; Rasmussen, C.N.; Willen, D.W.A.

    1999-01-01

    The ac loss of a superconducting cable conductor carrying an ac current is small. Therefore the ratio between the inductive (out-of-phase) and the resistive (in-phase) voltages over the conductor is correspondingly high. In vectorial representations this results in phase angles between the current and the voltage over the cable close to 90 degrees. This has the effect that the loss cannot be derived directly using most commercial lock-in amplifiers due to their limited absolute accuracy. However, by using two lock-in amplifiers and an appropriate correction scheme the high relative accuracy of such lock-in amplifiers can be exploited. In this paper we present the results from ac-loss measurements on a low loss 10 metre long high temperature superconducting cable conductor using such a correction scheme. Measurements were carried out with and without a compensation circuit that could reduce the inductive voltage. The 1 μV cm⁻¹ critical current of the conductor was 3240 A at 77 K. At an rms current of 2 kA (50 Hz) the ac loss was derived to be 0.6±0.15 W m⁻¹. This is, to the best of our knowledge, the lowest value of ac loss of a high temperature superconducting cable conductor reported so far at these high currents. (author)
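
    The sensitivity to phase that motivates the correction scheme can be checked in a few lines: the loss is P = U_rms·I_rms·cos(φ), and with φ near 90 degrees a small absolute phase error swamps the in-phase component. The voltage and phase figures below are illustrative, chosen only to land near the 2 kA, ~0.6 W/m operating point reported above:

        import numpy as np

        # How an absolute phase error in the lock-in propagates into the
        # derived ac loss when the voltage is nearly purely inductive.

        I_rms = 2000.0                     # A
        U_rms = 0.5                        # V across the 10 m conductor (assumed)
        phi = np.deg2rad(89.66)            # nearly purely inductive voltage

        P_true = U_rms * I_rms * np.cos(phi)          # ~6 W total, i.e. ~0.6 W/m
        for err_deg in (0.001, 0.01, 0.1):            # lock-in absolute phase error
            P_meas = U_rms * I_rms * np.cos(phi - np.deg2rad(err_deg))
            rel = abs(P_meas - P_true) / P_true
            print(f"{err_deg:6.3f} deg error -> {rel:5.1%} error in the derived loss")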

  3. Powder metallurgical high performance materials. Proceedings. Volume 2: P/M hard materials

    Energy Technology Data Exchange (ETDEWEB)

    Kneringer, G; Roedhammer, P; Wildner, H [eds.

    2001-07-01

    The proceedings of these seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  5. The high velocity, high adiabat, ``Bigfoot'' campaign and tests of indirect-drive implosion scaling

    Science.gov (United States)

    Casey, Daniel

    2017-10-01

    To achieve hotspot ignition, inertial confinement fusion (ICF) implosions must achieve high hotspot internal energy that is inertially confined by a dense shell of DT fuel. To accomplish this, implosions are designed to achieve high peak implosion velocity, good energy coupling between the hotspot and imploding shell, and high areal-density at stagnation. However, experiments have shown that achieving these simultaneously is extremely challenging, partly because of inherent tradeoffs between these three interrelated requirements. The Bigfoot approach is to intentionally trade off high convergence, and therefore areal-density, in favor of high implosion velocity and good coupling between the hotspot and shell. This is done by intentionally colliding the shocks in the DT ice layer. This results in a short laser pulse, which improves hohlraum symmetry and predictability, while the reduced compression improves hydrodynamic stability. The results of this campaign will be reviewed and include demonstrated low-mode symmetry control at two different hohlraum geometries (5.75 mm and 5.4 mm diameters) and at two different target scales (5.4 mm and 6.0 mm hohlraum diameters) spanning 300-430 TW in laser power and 0.8-1.7 MJ in laser energy. Results of the 10% scaling between these designs for the hohlraum and capsule will be presented. Hydrodynamic instability growth from engineering features like the capsule fill tube is currently thought to be a significant perturbation to the target performance and a major factor in reducing its performance compared to calculations. Evidence supporting this hypothesis as well as plans going forward will be presented. Ongoing experiments are attempting to measure the impact on target performance from an increase in target scale, and the preliminary results will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  6. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  7. RECENT ADVANCES IN ULTRA-HIGH PERFORMANCE LIQUID CHROMATOGRAPHY FOR THE ANALYSIS OF TRADITIONAL CHINESE MEDICINE

    Science.gov (United States)

    Huang, Huilian; Liu, Min; Chen, Pei

    2014-01-01

    Traditional Chinese medicine has been widely used for the prevention and treatment of various diseases for thousands of years in China. Ultra-high performance liquid chromatography (UHPLC) is a relatively new technique offering new possibilities. This paper reviews recent developments in UHPLC in the separation and identification, fingerprinting, quantification, and metabolism of traditional Chinese medicine. Recently, the combination of UHPLC with MS has improved the efficiency of the analysis of these materials. PMID:25045170

  8. DEVELOPMENT OF HIGH-RISE CONSTRUCTION IN THE CITIES WITH POPULATION FROM 250 TO 500 THOUSAND INHABITANTS (on the example of the cities of the Ural Federal District)

    Directory of Open Access Journals (Sweden)

    Olga Mikhaylovna Shentsova

    2017-09-01

    In this article, the history of high-rise construction, features of the perception of high-rise buildings in the urban environment, and the factors that influence the siting of high-rise buildings are considered. Concepts such as "skyscraper" and "high-rise dominant" are defined. An analysis is also provided of the presence of high-rise buildings in the large cities of the Ural Federal District with populations from 250 to 500 thousand inhabitants: Kurgan, Nizhny Tagil, Nizhnevartovsk, Surgut and Magnitogorsk. In addition, an analysis is given of the town-planning situation of Magnitogorsk, identifying possible, most probable, and proposed sites for high-rise buildings at the intersections or ends of street axes. The purpose of the work is the analysis of high-rise construction in the large cities of the Ural Federal District with populations from 250 to 500 thousand inhabitants. Method or methodology of the work: methods of theoretical and visual analysis, observation, and the study of literature and Internet sources were used. Results: systematized theoretical material in the field of architecture and town planning of the cities of the Ural Federal District is obtained. Scope of the results: the results can be applied in architectural education and in practical architectural work.

  9. Del funcionalismo industrial al de servicios: ¿la nueva utopía de la metrópoli postindustrial del valle de México?

    Directory of Open Access Journals (Sweden)

    Blanca Ramírez

    2006-05-01

    The aim of this essay is to answer some questions that have been part of old and new reflections on the Metropolis of the Valley of Mexico, and to identify the new trends we perceive in its development. We start from the assumption that there is confusion in how it is named, and that the medium- and long-term vision of its transformation involves an important shift from the industrializing function that the import-substitution model imposed on the city to a service-oriented, heritage-centred one imposed by the post-industrial vision in which it is currently immersed. In this transformation the periphery stands out in importance, given its privileged position with regard to the natural and cultural heritage that is its own, thus enabling its contribution to achieving the sustainability of the metropolis.

  10. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has robust performance in both simulated data and proteomics data from a large clinical study. Because variation in patients' sample quality and drifting instrument performance are unavoidable for clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.
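
    The idea of treating missingness itself as evidence can be illustrated with a simple two-part test, combining a Welch t-test on observed intensities with Fisher's exact test on detection rates; this is only a sketch of the general approach, not the SALPS model:

        import numpy as np
        from scipy import stats

        # A peptide that is unquantified (NaN) more often in one group is
        # itself informative, so combine intensity evidence with detection-
        # rate evidence and merge the two p-values by Fisher's method.

        def two_part_test(x, y):
            """x, y: 1-D arrays with np.nan for unquantified peptides."""
            ox, oy = x[~np.isnan(x)], y[~np.isnan(y)]
            # Part 1: intensity difference among observed values (Welch t-test).
            t_p = stats.ttest_ind(ox, oy, equal_var=False).pvalue if len(ox) > 1 and len(oy) > 1 else 1.0
            # Part 2: difference in detection rates (Fisher's exact test).
            table = [[len(ox), len(x) - len(ox)], [len(oy), len(y) - len(oy)]]
            f_p = stats.fisher_exact(table)[1]
            # Combine the two p-values (Fisher's method, 2 tests -> df = 4).
            chi2 = -2 * (np.log(t_p) + np.log(f_p))
            return stats.chi2.sf(chi2, df=4)

        rng = np.random.default_rng(2)
        ctrl = np.where(rng.random(12) < 0.2, np.nan, rng.normal(20, 1, 12))
        case = np.where(rng.random(12) < 0.6, np.nan, rng.normal(21, 1, 12))
        print(f"combined p = {two_part_test(ctrl, case):.4f}")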

  11. Building and measuring a high performance network architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest-performance networks in the world. At SC2000, large-scale and complex local- and wide-area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel high-performance networking technologies, together with accumulated measurements that shed light on the networks of the future.

  12. Thousands of cold anti-atoms produced at CERN

    CERN Multimedia

    2002-01-01

    The antimatter factory delivers its first major results. ATHENA has just produced thousands of anti-atoms. This is the result of techniques developed by ATRAP and ATHENA, the two collaborations aiming to study antihydrogen.

  13. Aspectos ergonômicos e estatísticos no projeto de um carro do metrô

    Directory of Open Access Journals (Sweden)

    Costa Neto Pedro Luiz de Oliveira

    2002-01-01

    Full Text Available This article describes experiments carried out as part of a project involving aspects of the design of a car for a new line of the São Paulo Metrô. It considers ergonomic aspects related to the position of safety bars and seats, as well as statistical aspects related to the flow of passengers entering and leaving the car at the station. Of particular interest in this study is the use of the Latin square technique in multiple regression analysis as a way of reducing the size of the experiment.

  14. Centrifuge modelling - migration of radionuclides from engineered trenches

    International Nuclear Information System (INIS)

    Dean, E.T.R.; Schofield, A.N.

    1991-12-01

    This report provides an overview of some centrifuge small-scale physical model tests and 1g experimental and theoretical work relating to the sub-surface migration of a model pollutant (sodium chloride) from a notional prototype surface landfill of width 25 metres and depth 3 metres cut into a 20 metre deep layer of nominally uniform soil overlying a more permeable base layer. An introduction is given to the application of geotechnical centrifuge modelling techniques to pollutant migration studies. Experiments performed at 1/100th scale using the Cambridge 10 metre diameter Geotechnical Beam Centrifuge simulating transport through silt over prototype time periods of around 35 years, are summarised. Comparisons of data with calculations using early versions of the POLLUTE and MIGRATE computer codes are presented. An experiment at 1/400th scale using the new Cambridge Geotechnical Drum Centrifuge, involving transport through clay over a prototype time period of around 1000 years, is described. Potential future uses of centrifuge modelling techniques to simulate long-term migration through more complex hydrological environments are also discussed. (author)
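
    The time compression that makes such tests practical follows from the standard centrifuge scaling law for diffusion-controlled transport: on a 1/N-scale model spun at N gravities, prototype time is compressed by a factor of N². A back-of-envelope check against the figures above (assuming pure diffusive scaling):

      \[
        t_{\mathrm{model}} = \frac{t_{\mathrm{prototype}}}{N^{2}}, \qquad
        \frac{35\ \mathrm{yr}}{100^{2}} \approx 31\ \mathrm{h}, \qquad
        \frac{1000\ \mathrm{yr}}{400^{2}} \approx 55\ \mathrm{h},
      \]

    so both test campaigns fit into a few days of centrifuge time.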

  15. Towards High Performance Processing In Modern Java Based Control Systems

    CERN Document Server

    Misiowiec, M; Buttner, M

    2011-01-01

    CERN controls software is often developed on Java foundations. Some systems carry out a combination of data-, network- and processor-intensive tasks within strict time limits. Hence, there is a demand for high-performing, quasi-real-time solutions. Extensive prototyping of the new CERN monitoring and alarm software required us to address such expectations. The system must handle tens of thousands of data samples every second, along its three tiers, applying complex computations throughout. To accomplish the goal, a deep understanding of multithreading, memory management and interprocess communication was required. Unexpected traps hide behind excessive use of 64-bit memory, and modern garbage collectors can severely impact the processing flow. Tuning the JVM configuration significantly affects the execution of the code. Even more important are the number of threads and the data structures used between them. Accurately dividing work into independent tasks might boost system performance. Thorough profili...
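
    The closing point about dividing work into independent tasks is language-agnostic; below is a minimal sketch (in Python for brevity, with a stand-in computation; in the Java setting described here, java.util.concurrent would play the analogous role) of partitioning a stream of samples into chunks that workers process without shared mutable state.

      from concurrent.futures import ProcessPoolExecutor

      def process_chunk(chunk):
          # Stand-in for the per-sample computation; no shared mutable state,
          # so chunks can run fully in parallel.
          return sum(x * x for x in chunk)

      def chunked(seq, size):
          for i in range(0, len(seq), size):
              yield seq[i:i + size]

      if __name__ == "__main__":
          samples = list(range(100_000))
          with ProcessPoolExecutor() as pool:
              partials = pool.map(process_chunk, chunked(samples, 10_000))
          print(sum(partials))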

  16. Quasi-instantaneous and Long-term Deformations of High-Performance Concrete with Some Related Properties

    OpenAIRE

    Persson, Bertil

    1998-01-01

    This report outlines an experimental and numerical study on quasi-instantaneous and long-term deformations of High-Performance Concrete, HPC, with some related properties. For this purpose about two hundred small cylinders and about one thousand cubes of eight types of HPC were cast. The age at loading varied between 18h and 28 days. Other principal properties of HPC were studied up to 4 years' age. Creep deformations of the HPC were studied from 0.01 s of loading time until 5 years' ...

  17. A Short Is Worth a Thousand Films!

    Science.gov (United States)

    Massi, Maria Palmira; Blázquez, Bettiana Andrea

    2012-01-01

    The importance of visual input in the contemporary ELT classroom is such that it is commonplace to use audiovisual elements provided by pictures, films, clips and the like. The power of images is unquestionable, and as the old saying goes, an image is worth a thousand words. Following this line of reasoning, the objective of this article is to…

  18. Complex three dimensional modelling of porous media using high performance computing and multi-scale incompressible approach

    Science.gov (United States)

    Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.

    2013-05-01

    In the context of biofilm growth in porous media, we developed high performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms, embedded in polymeric extracellular substances, that generally develop at fluid-solid interfaces such as pore interfaces in a water-saturated porous medium. Biofilms in porous media have several applications, for instance in bio-remediation methods, by allowing the dissolution of organic pollutants. Many theoretical studies have addressed the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described as simplified theoretical media (stratified media, cubic networks of spheres ...). Recent experimental advances, however, have provided tomography images of bio-colonized porous media, which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations arising in upscaling procedures for realistic porous media, we compute the velocity field of fluids through pores on complex geometries described by a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. The cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on fluid transport properties in porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high performance computing on up to 1000 processors. Steady-state Stokes equations are solved using a finite volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks. Acceleration factors of 20 to 40 can be reached. Tens of geometries can now be
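
    The final upscaling step is compact once the pore-scale Stokes solve is done: Darcy's law relates the volume-averaged velocity to the imposed pressure gradient. A minimal sketch (Python; the random voxel field and the imposed gradient are placeholders for the finite-volume solution on the real tomography geometry):

      import numpy as np

      rng = np.random.default_rng(3)

      # Stand-in for the pore-scale x-velocity solution (m/s) on a voxel grid;
      # in the real workflow this comes from the finite-volume Stokes solver.
      # Solid voxels are marked NaN (an assumed convention).
      ux = rng.random((64, 64, 64)) * 1e-5
      ux[rng.random((64, 64, 64)) < 0.6] = np.nan   # ~60% solid fraction

      mu = 1.0e-3      # dynamic viscosity of water, Pa*s
      dp_dx = 100.0    # imposed pressure gradient magnitude, Pa/m (assumed)

      # Superficial (Darcy) velocity: fluid flux averaged over the whole domain.
      u_darcy = np.nansum(ux) / ux.size

      # Darcy's law: u = (K / mu) * |dP/dx|  =>  K = mu * u / |dP/dx|
      K = mu * u_darcy / dp_dx
      print(f"permeability K = {K:.3e} m^2")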

  19. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever tried. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...
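
    The point about memory locality and vector processing is visible even at scripting level; a small, machine-dependent illustration (Python/NumPy) of the gap between an interpreted element-wise loop and a single vectorized pass over contiguous memory:

      import time
      import numpy as np

      x = np.random.rand(10_000_000)

      t0 = time.perf_counter()
      s_loop = 0.0
      for v in x:                    # one Python-level iteration per element
          s_loop += v * v
      t1 = time.perf_counter()

      s_vec = float(np.dot(x, x))    # vectorized pass over contiguous data
      t2 = time.perf_counter()

      print(f"loop      : {t1 - t0:.2f} s")
      print(f"vectorized: {t2 - t1:.3f} s, same result: {np.isclose(s_loop, s_vec)}")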

  20. 47 CFR 52.20 - Thousands-block number pooling.

    Science.gov (United States)

    2010-10-01

    47 CFR § 52.20 (rev. Oct. 1, 2010), Thousands-block number pooling. Title 47 (Telecommunication), Federal Communications Commission (continued), Common Carrier Services. Surviving fragment of the rule text: "... aligned with any particular telecommunication industry segment, and shall comply with the same neutrality ..."

  1. Hacia la gestión ambiental de residuos sólidos en las metrópolis de América Latina

    Directory of Open Access Journals (Sweden)

    Luz Ángela Rodríguez Escobar

    2002-11-01

    Full Text Available The pollution generated by the accumulation of solid waste is present in all the metropolises of Latin America, affecting the ecosystem. This pollution is caused by the population and its agglomeration in urban areas. Statistical data from Latin American metropolises establish a direct relationship between population and the accumulation of solid waste, and also between income level and waste generation, showing that the population-solid waste relationship is mediated by economic and cultural variables. Data on per-capita garbage generation and socioeconomic level reveal differences in the quantity and quality of the waste generated by individuals of different socioeconomic levels, which in turn are associated with different lifestyles and consumption patterns. Garbage production is thus driven by the dynamics of production and consumption and by demographic dynamics, as an unintended effect of both, which makes solid waste a by-product of the development model and of demographic dynamics. In this scenario, the environmental problem of solid waste in the metropolises of Latin America appears unresolved, and solving it in any fundamental way would require changing the development model and the behavior of society. A less extreme solution consists of the integrated handling of solid waste through comprehensive management policies.

  2. Nicolau Sevcenko, Orfeo extático en la metrópolis: San Pablo, sociedad y cultura en los febriles años veinte

    Directory of Open Access Journals (Sweden)

    Hernán Morales

    2016-03-01

    Full Text Available Bibliographic review of Nicolau Sevcenko's book Orfeo extático en la metrópolis: San Pablo, sociedad y cultura en los febriles años veinte, translated by Ada Solari, Bernal, Universidad Nacional de Quilmes, 2013, 417 pp.

  3. Development of a performance anxiety scale for music students.

    Science.gov (United States)

    Çirakoğlu, Okan Cem; Şentürk, Gülce Çoskun

    2013-12-01

    In the present research, the Performance Anxiety Scale for Music Students (PASMS) was developed in three successive studies. In Study 1, the factor structure of PASMS was explored and three components were found: fear of stage (FES), avoidance (AVD) and symptoms (SMP). The internal consistency of the subscales of PASMS, which consisted of 27 items, varied between 0.89 and 0.91. The internal consistency for the whole scale was found to be 0.95. The correlations between PASMS and other anxiety-related measures were significant and in the expected direction, indicating that the scale has convergent validity. The construct validity of the scale was assessed in Study 2 by confirmatory factor analysis. After several revisions, the final tested model achieved acceptable fits. In Study 3, the 14-day test-retest reliability of the final 24-item version of PASMS was tested and found to be extremely high (0.95). In all three studies, the whole-scale and subscale scores of females were significantly higher than those of males.
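
    The internal-consistency figures quoted are Cronbach's alpha coefficients; for reference, a short sketch (Python, with simulated Likert responses in place of the real PASMS data) of how alpha is computed from a respondents-by-items score matrix:

      import numpy as np

      def cronbach_alpha(items):
          """items: 2-D array, rows = respondents, columns = scale items."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_vars / total_var)

      # Simulated 5-point responses: 200 students x 27 items sharing one factor.
      rng = np.random.default_rng(1)
      latent = rng.normal(size=(200, 1))
      noise = rng.normal(scale=0.8, size=(200, 27))
      scores = np.clip(np.round(3 + latent + noise), 1, 5)
      print(f"alpha = {cronbach_alpha(scores):.2f}")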

  4. Mechanical Constraints on Flight at High Elevation Decrease Maneuvering Performance of Hummingbirds.

    Science.gov (United States)

    Segre, Paolo S; Dakin, Roslyn; Read, Tyson J G; Straw, Andrew D; Altshuler, Douglas L

    2016-12-19

    High-elevation habitats offer ecological advantages including reduced competition, predation, and parasitism [1]. However, flying organisms at high elevation also face physiological challenges due to lower air density and oxygen availability [2]. These constraints are expected to affect the flight maneuvers that are required to compete with rivals, capture prey, and evade threats [3-5]. To test how individual maneuvering performance is affected by elevation, we measured the free-flight maneuvers of male Anna's hummingbirds in a large chamber translocated to a high-elevation site and then measured their performance at low elevation. We used a multi-camera tracking system to identify thousands of maneuvers based on body position and orientation [6]. At high elevation, the birds' translational velocities, accelerations, and rotational velocities were reduced, and they used less demanding turns. To determine how mechanical and metabolic constraints independently affect performance, we performed a second experiment to evaluate flight maneuvers in an airtight chamber infused with either normoxic heliox, to lower air density, or nitrogen, to lower oxygen availability. The hypodense treatment caused the birds to reduce their accelerations and rotational velocities, whereas the hypoxic treatment had no significant effect on maneuvering performance. Collectively, these experiments reveal how aerial maneuvering performance changes with elevation, demonstrating that as birds move up in elevation, air density constrains their maneuverability prior to any influence of oxygen availability. Our results support the hypothesis that changes in competitive ability at high elevations are the result of mechanical limits to flight performance [7]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. High spatial and spectral resolution measurements of Jupiter's auroral regions using Gemini-North-TEXES

    Science.gov (United States)

    Sinclair, J. A.; Orton, G. S.; Greathouse, T. K.; Lacy, J.; Giles, R.; Fletcher, L. N.; Vogt, M.; Irwin, P. G.

    2017-12-01

    Jupiter exhibits auroral emission at a multitude of wavelengths. Auroral emission at X-ray, ultraviolet and near-infrared wavelengths demonstrates the precipitation of ions and electrons in Jupiter's upper atmosphere, at altitudes exceeding 250 km above the 1-bar level. Enhanced mid-infrared emission of CH4, C2H2, C2H4 and further hydrocarbons is also observed coincident with Jupiter's auroral regions. Retrieval analyses of infrared spectra from IRTF-TEXES (Texas Echelon Cross Echelle Spectrograph on NASA's Infrared Telescope Facility) indicate strong heating at the 1-mbar level and evidence of ion-neutral chemistry, which enriches the abundances of unsaturated hydrocarbons (Sinclair et al., 2017b, doi:10.1002/2017GL073529; Sinclair et al., 2017c, under review). Determining the extent to which these stratospheric phenomena are correlated and physically coupled with the shorter-wavelength auroral emission originating from higher altitudes has been a challenge due to the limited spatial resolution available on the IRTF. Smaller-scale features observed in the near-infrared and ultraviolet emission, such as the main 'oval', transient 'swirls' and dusk-active regions within the main oval (e.g. Stallard et al., 2014, doi:10.1016/j.icarus.2015.12.044; Nichols et al., 2017, doi:10.1002/2017GL073029) are potentially being blurred in the mid-infrared by the diffraction-limited resolution (0.7") of IRTF's 3-metre primary aperture. However, on March 17-19th 2017, we obtained spectral measurements of H2 S(1), CH4, C2H2, C2H4 and C2H6 emission of Jupiter's high latitudes using TEXES on Gemini-North, which has an 8-metre primary aperture. This rare opportunity combines the superior spectral resolving power of TEXES with the high spatial resolution provided by Gemini-North's 8-metre aperture. We will perform a retrieval analysis to determine the 3D distributions of temperature, C2H2, C2H4 and C2H6. The morphology will be compared with near-contemporaneous measurements of H3+ emission from
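
    The resolution gain that motivates the move to Gemini-North follows from the Rayleigh criterion; taking λ ≈ 8 μm as a representative mid-infrared wavelength (near the 7.7 μm CH4 band):

      \[
        \theta \approx 1.22\,\frac{\lambda}{D}:\qquad
        D = 3\ \mathrm{m} \;\Rightarrow\; \theta \approx 0.7'',\qquad
        D = 8\ \mathrm{m} \;\Rightarrow\; \theta \approx 0.25'',
      \]

    consistent with the 0.7" figure quoted for the IRTF and an almost threefold sharper beam on an 8-metre aperture.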

  6. Ten Thousand Years of Solitude

    Energy Technology Data Exchange (ETDEWEB)

    Benford, G. (Los Alamos National Lab., NM (USA) California Univ., Irvine, CA (USA). Dept. of Physics); Kirkwood, C.W. (Los Alamos National Lab., NM (USA) Arizona State Univ., Tempe, AZ (USA). Coll. of Business Administration); Harry, O. (Los Alamos National Lab., NM (USA)); Pasqualetti, M.J. (Los Alamos National Lab., NM (USA) Arizona State Univ., Tempe, AZ (USA))

    1991-03-01

    This report documents the authors' work as an expert team advising the US Department of Energy on modes of inadvertent intrusion over the next 10,000 years into the Waste Isolation Pilot Plant (WIPP) nuclear waste repository. Credible types of potential future accidental intrusion into the WIPP are estimated as a basis for creating warning markers to prevent inadvertent intrusion. A six-step process is used to structure possible scenarios for such intrusion, and it is concluded that the probability of inadvertent intrusion into the WIPP repository over the next ten thousand years lies between one and twenty-five percent. 3 figs., 5 tabs.

  7. Ten Thousand Years of Solitude?

    International Nuclear Information System (INIS)

    Benford, G.; Pasqualetti, M.J.

    1991-03-01

    This report documents the authors' work as an expert team advising the US Department of Energy on modes of inadvertent intrusion over the next 10,000 years into the Waste Isolation Pilot Plant (WIPP) nuclear waste repository. Credible types of potential future accidental intrusion into the WIPP are estimated as a basis for creating warning markers to prevent inadvertent intrusion. A six-step process is used to structure possible scenarios for such intrusion, and it is concluded that the probability of inadvertent intrusion into the WIPP repository over the next ten thousand years lies between one and twenty-five percent. 3 figs., 5 tabs

  8. The long-term effect of social comparison on academic performance

    NARCIS (Netherlands)

    Wehrens, Maike J. P. W.; Kuyper, Hans; Dijkstra, Pieternel; Buunk, Abraham P.; van der Werf, Margaretha P. C.

    2010-01-01

    The present study was part of a large-scale cohort study among several thousand students in the Netherlands. The purpose of the study was to investigate the long-term effects of comparison choice, i.e., comparison with a target performing better or worse than oneself, and academic comparative

  9. Mud Volcanoes - Analogs to Martian Cones and Domes (by the Thousands!)

    Science.gov (United States)

    Allen, Carlton C.; Oehler, Dorothy

    2010-01-01

    Mud volcanoes are mounds formed by low temperature slurries of gas, liquid, sediments and rock that erupt to the surface from depths of meters to kilometers. They are common on Earth, with estimates of thousands onshore and tens of thousands offshore. Mud volcanoes occur in basins with rapidly-deposited accumulations of fine-grained sediments. Such settings are ideal for concentration and preservation of organic materials, and mud volcanoes typically occur in sedimentary basins that are rich in organic biosignatures. Domes and cones, cited as possible mud volcanoes by previous authors, are common on the northern plains of Mars. Our analysis of selected regions in southern Acidalia Planitia has revealed over 18,000 such features, and we estimate that more than 40,000 occur across the area. These domes and cones strongly resemble terrestrial mud volcanoes in size, shape, morphology, associated flow structures and geologic setting. Geologic and mineralogic arguments rule out alternative formation mechanisms involving lava, ice and impacts. We are studying terrestrial mud volcanoes from onshore and submarine locations. The largest concentration of onshore features is in Azerbaijan, near the western edge of the Caspian Sea. These features are typically hundreds of meters to several kilometers in diameter, and tens to hundreds of meters in height. Satellite images show spatial densities of 20 to 40 eruptive centers per 1000 square km. Many of the features remain active, and fresh mud flows as long as several kilometers are common. A large field of submarine mud volcanoes is located in the Gulf of Cadiz, off the Atlantic coasts of Morocco and Spain. High-resolution sonar bathymetry reveals numerous km-scale mud volcanoes, hundreds of meters in height. Seismic profiles demonstrate that the mud erupts from depths of several hundred meters. These submarine mud volcanoes are the closest morphologic analogs yet found to the features in Acidalia Planitia. We are also conducting

  10. Scaling of neck performance requirements in side impacts

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Meijer, R.; Rodarius, C.; Been, B.W.

    2008-01-01

    Neck biofidelity performance requirements for different sized crash dummies and human body computer models are usually based on scaling of performance requirements derived for a 50th percentile body size. The objective of this study is to investigate the validity of the currently used scaling laws

  11. High performance flexible CMOS SOI FinFETs

    KAUST Repository

    Fahad, Hossain M.; Sevilla, Galo T.; Ghoneim, Mohamed T.; Hussain, Muhammad Mustafa

    2014-01-01

    We demonstrate the first ever CMOS compatible soft etch back based high performance flexible CMOS SOI FinFETs. The move from planar to non-planar FinFETs has enabled continued scaling down to the 14 nm technology node. This has been possible due

  12. High Performance Nanofiltration Membrane for Effective Removal of Perfluoroalkyl Substances at High Water Recovery.

    Science.gov (United States)

    Boo, Chanhee; Wang, Yunkun; Zucker, Ines; Choo, Youngwoo; Osuji, Chinedum O; Elimelech, Menachem

    2018-05-31

    We demonstrate the fabrication of a loose, negatively charged nanofiltration (NF) membrane with tailored selectivity for the removal of perfluoroalkyl substances with reduced scaling potential. A selective polyamide layer was fabricated on top of a polyethersulfone support via interfacial polymerization of trimesoyl chloride and a mixture of piperazine and bipiperidine. Incorporating high molecular weight bipiperidine during the interfacial polymerization enables the formation of a loose, nanoporous selective layer structure. The fabricated NF membrane possessed a negative surface charge and had a pore diameter of ~1.2 nm, much larger than a widely used commercial NF membrane (i.e., NF270 with pore diameter of ~0.8 nm). We evaluated the performance of the fabricated NF membrane for the rejection of different salts (i.e., NaCl, CaCl2, and Na2SO4) and perfluorooctanoic acid (PFOA). The fabricated NF membrane exhibited a high retention of PFOA (~90%) while allowing high passage of scale-forming cations (i.e., calcium). We further performed gypsum scaling experiments to demonstrate lower scaling potential of the fabricated loose porous NF membrane compared to NF membranes having a dense selective layer under solution conditions simulating high water recovery. Our results demonstrate that properly designed NF membranes are a critical component of a high recovery NF system, which provide an efficient and sustainable solution for remediation of groundwater contaminated with perfluoroalkyl substances.

  13. Designing a High Performance Parallel Personal Cluster

    OpenAIRE

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, competition for resources have been some of the reasons why the scientifi...

  14. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    Directory of Open Access Journals (Sweden)

    Parichit Sharma

    Full Text Available The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture

  15. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    Science.gov (United States)

    Sharma, Parichit; Mantri, Shrikant S

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design
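
    A minimal sketch (Python; the queue parameters, paths and resource limits are assumptions, and mpiBLAST is invoked here with its blastall-style flags) of the kind of Torque job-script generation and qsub submission that such an interface automates:

      import subprocess

      def submit_mpiblast(query, database, out, nodes=4, ppn=8,
                          walltime="04:00:00"):
          nproc = nodes * ppn
          lines = [
              "#!/bin/bash",
              "#PBS -N wimpiblast_job",
              f"#PBS -l nodes={nodes}:ppn={ppn}",
              f"#PBS -l walltime={walltime}",
              "cd $PBS_O_WORKDIR",
              f"mpirun -np {nproc} mpiblast -p blastp "
              f"-d {database} -i {query} -o {out}",
          ]
          with open("job.pbs", "w") as fh:
              fh.write("\n".join(lines) + "\n")
          # qsub prints the Torque job id on success.
          result = subprocess.run(["qsub", "job.pbs"], capture_output=True,
                                  text=True, check=True)
          return result.stdout.strip()

      # Hypothetical usage:
      # print(submit_mpiblast("query.fasta", "nr", "results.txt"))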

  16. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  17. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella

    2017-11-02

    In past years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical (PEC) devices for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite the promising initial results, there are still many drawbacks that need to be overcome to reach large-scale production with optimized performance [3]. We present a new device that relies on the optimization of a nanostructuring process that exploits suitably disordered surfaces. Additionally, this device can harvest light on both sides to efficiently gain and store the energy needed to keep the photocatalytic reaction active.

  18. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella; Fu, Hui-chun; He, Jr-Hau; Fratalocchi, Andrea

    2017-01-01

    In past years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical (PEC) devices for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite the promising initial results, there are still many drawbacks that need to be overcome to reach large-scale production with optimized performance [3]. We present a new device that relies on the optimization of a nanostructuring process that exploits suitably disordered surfaces. Additionally, this device can harvest light on both sides to efficiently gain and store the energy needed to keep the photocatalytic reaction active.

  19. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    Energy Technology Data Exchange (ETDEWEB)

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS) has previously presented the results of mixing performance in two different sizes of small scale DSTs to support scale up estimates of full scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results which lead to the conclusion that the two scales of small DST are behaving similarly and that full scale performance is predictable will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing from the small scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

  20. Ultimate Scaling of High-κ Gate Dielectrics: Higher-κ or Interfacial Layer Scavenging?

    Directory of Open Access Journals (Sweden)

    Takashi Ando

    2012-03-01

    Full Text Available Current status and challenges of aggressive equivalent-oxide-thickness (EOT) scaling of high-κ gate dielectrics via higher-κ (> 20) materials and interfacial layer (IL) scavenging techniques are reviewed. La-based higher-κ materials show aggressive EOT scaling (0.5–0.8 nm), but with effective workfunction (EWF) values suitable only for n-type field-effect-transistors (FETs). Further exploration of p-type FET-compatible higher-κ materials is needed. Meanwhile, IL scavenging is a promising approach to extend Hf-based high-κ dielectrics to future nodes. Remote IL scavenging techniques enable EOT scaling below 0.5 nm. Mobility-EOT trends in the literature suggest that short-channel performance improvement is attainable with aggressive EOT scaling via IL scavenging or La-silicate formation. However, extreme IL scaling (e.g., zero IL) is accompanied by loss of EWF control and a severe penalty in reliability. Therefore, highly precise IL thickness control in an ultra-thin IL regime (< 0.5 nm) will be the key technology to satisfy both performance and reliability requirements for future CMOS devices.
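
    The trade-off navigated throughout this review is captured by the series-capacitance expression for EOT; with illustrative numbers for a Hf-based film (κ ≈ 20):

      \[
        \mathrm{EOT} = \frac{3.9}{\kappa}\, t_{\mathrm{high\text{-}\kappa}} + \mathrm{EOT}_{\mathrm{IL}},
        \qquad
        t = 2\ \mathrm{nm},\ \kappa = 20 \;\Rightarrow\;
        \frac{3.9}{20}\times 2\ \mathrm{nm} \approx 0.39\ \mathrm{nm} + \mathrm{EOT}_{\mathrm{IL}},
      \]

    so an unscavenged SiO2-like IL of ~0.5 nm would dominate the stack, which is why IL scavenging offers more leverage than further increases in κ.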

  1. Gram-scale production of B, N co-doped graphene-like carbon for high performance supercapacitor electrodes

    Science.gov (United States)

    Chen, Zhuo; Hou, Liqiang; Cao, Yan; Tang, Yushu; Li, Yongfeng

    2018-03-01

    Boron and nitrogen co-doped graphene-like carbon (BNC) was synthesized at the gram scale via a two-step method comprising a ball-milling process and a calcination process, and used as an electrode material for supercapacitors. The ball-milling process creates a high surface area and abundant active sites in the graphene-like carbon. Interestingly, the nitrogen atoms are doped into the carbon matrix without any N source other than air. The textural and chemical properties can be easily tuned by changing the calcination temperature; at 900 °C, BNC with a high surface area (802.35 m2/g), a high boron content (2.19 at%), a hierarchical pore size distribution and a relatively high degree of graphitization was obtained. Measured in a three-electrode system, it shows excellent rate capability, retaining about 78.2% of its initial capacitance (254 F/g at 0.25 A/g) at high current density (199 F/g at 100 A/g), and good cycling stability (90% capacitance retention over 1000 cycles at 100 A/g). Furthermore, in a two-electrode system, a specific capacitance of 225 F/g at 0.25 A/g and good cycling stability (93% capacitance retention over 20,000 cycles at 25 A/g) were achieved using BNC electrodes. This synthesis strategy is a facile and effective route to multi-doped graphene-like carbons as promising electrode materials for supercapacitors.
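
    Specific capacitances such as those quoted are conventionally extracted from galvanostatic charge-discharge curves (assuming that convention here); in a three-electrode cell,

      \[
        C = \frac{I\,\Delta t}{m\,\Delta V},
      \]

    where I is the discharge current, Δt the discharge time, m the active-material mass and ΔV the potential window, which is how a discharge at 0.25 A/g yields the 254 F/g figure above.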

  2. Science Outreach for the Thousands: Coe College's Playground of Science

    Science.gov (United States)

    Watson, D. E.; Franke, M.; Affatigato, M.; Feller, S.

    2011-12-01

    Coe College is a private liberal arts college nestled in the northeast quadrant of Cedar Rapids, IA. Coe takes pride in the outreach it does in the local community. The sciences at Coe find enjoyment in educating the children and families of this community through a diverse set of venues: from performing science demonstrations for children at Cedar Rapids' Fourth of July Freedom Festival to hosting summer forums and talks to invigorate the minds of its more mature audiences. Among these events, the signature event of the year is the Coe Playground of Science. On the last Thursday of October, before Halloween, the science departments at Coe invite nearly two thousand children, from pre-elementary to high school ages, along with their parents, to participate in a night filled with science demos, haunted halls, and trick-or-treating for more than just candy. The demonstrations are performed by professors and students alike from a raft of cooperative departments including physics, chemistry, biology, math, computer science, nursing, ROTC, and psychology. This event greatly strengthens the relationships between institution members and community members. The sciences at Coe understand the importance of imparting the thrill and hunger for exploration and discovery into the future generations. More importantly, they recognize that this cannot start and end at the collegiate level, but that the American public must be reached at younger ages and continue to be encouraged beyond the college experience. The Playground of Science unites these two groups under the common goal of elevating scientific interest in the American people.

  3. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    Science.gov (United States)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    semiconductor logic. Wavelength Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bisection bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared-memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  4. Local and Regional Impacts of Pollution on Coral Reefs along the Thousand Islands North of the Megacity Jakarta, Indonesia

    Science.gov (United States)

    Baum, Gunilla; Januar, Hedi I.; Ferse, Sebastian C. A.; Kunzmann, Andreas

    2015-01-01

    Worldwide, coral reefs are challenged by multiple stressors due to growing urbanization, industrialization and coastal development. Coral reefs along the Thousand Islands off Jakarta, one of the largest megacities worldwide, have degraded dramatically over recent decades. The shift and decline in coral cover and composition has been extensively studied with a focus on large-scale gradients (i.e. regional drivers), however special focus on local drivers in shaping spatial community composition is still lacking. Here, the spatial impact of anthropogenic stressors on local and regional scales on coral reefs north of Jakarta was investigated. Results indicate that the direct impact of Jakarta is mainly restricted to inshore reefs, separating reefs in Jakarta Bay from reefs along the Thousand Islands further north. A spatial patchwork of differentially degraded reefs is present along the islands as a result of localized anthropogenic effects rather than regional gradients. Pollution is the main anthropogenic stressor, with over 80% of variation in benthic community composition driven by sedimentation rate, NO2, PO4 and Chlorophyll a. Thus, the spatial structure of reefs is directly related to intense anthropogenic pressure from local as well as regional sources. Therefore, improved spatial management that accounts for both local and regional stressors is needed for effective marine conservation. PMID:26378910

  5. Local and Regional Impacts of Pollution on Coral Reefs along the Thousand Islands North of the Megacity Jakarta, Indonesia.

    Science.gov (United States)

    Baum, Gunilla; Januar, Hedi I; Ferse, Sebastian C A; Kunzmann, Andreas

    2015-01-01

    Worldwide, coral reefs are challenged by multiple stressors due to growing urbanization, industrialization and coastal development. Coral reefs along the Thousand Islands off Jakarta, one of the largest megacities worldwide, have degraded dramatically over recent decades. The shift and decline in coral cover and composition has been extensively studied with a focus on large-scale gradients (i.e. regional drivers), however special focus on local drivers in shaping spatial community composition is still lacking. Here, the spatial impact of anthropogenic stressors on local and regional scales on coral reefs north of Jakarta was investigated. Results indicate that the direct impact of Jakarta is mainly restricted to inshore reefs, separating reefs in Jakarta Bay from reefs along the Thousand Islands further north. A spatial patchwork of differentially degraded reefs is present along the islands as a result of localized anthropogenic effects rather than regional gradients. Pollution is the main anthropogenic stressor, with over 80% of variation in benthic community composition driven by sedimentation rate, NO2, PO4 and Chlorophyll a. Thus, the spatial structure of reefs is directly related to intense anthropogenic pressure from local as well as regional sources. Therefore, improved spatial management that accounts for both local and regional stressors is needed for effective marine conservation.

  6. Local and Regional Impacts of Pollution on Coral Reefs along the Thousand Islands North of the Megacity Jakarta, Indonesia.

    Directory of Open Access Journals (Sweden)

    Gunilla Baum

    Full Text Available Worldwide, coral reefs are challenged by multiple stressors due to growing urbanization, industrialization and coastal development. Coral reefs along the Thousand Islands off Jakarta, one of the largest megacities worldwide, have degraded dramatically over recent decades. The shift and decline in coral cover and composition has been extensively studied with a focus on large-scale gradients (i.e. regional drivers), however special focus on local drivers in shaping spatial community composition is still lacking. Here, the spatial impact of anthropogenic stressors on local and regional scales on coral reefs north of Jakarta was investigated. Results indicate that the direct impact of Jakarta is mainly restricted to inshore reefs, separating reefs in Jakarta Bay from reefs along the Thousand Islands further north. A spatial patchwork of differentially degraded reefs is present along the islands as a result of localized anthropogenic effects rather than regional gradients. Pollution is the main anthropogenic stressor, with over 80% of variation in benthic community composition driven by sedimentation rate, NO2, PO4 and Chlorophyll a. Thus, the spatial structure of reefs is directly related to intense anthropogenic pressure from local as well as regional sources. Therefore, improved spatial management that accounts for both local and regional stressors is needed for effective marine conservation.

  7. Melancholy in Contemporary Irish Poetry: The ‘Metre Generation’ and Mahon

    Directory of Open Access Journals (Sweden)

    Ailbhe Darcy

    2017-01-01

    Full Text Available This article explores the influence of Derek Mahon’s melancholic poetry on a younger generation of Irish poets. Drawing on Peter Schwenger’s 'The Tears of Things: Melancholy and Physical Objects' (2006), it argues that Mahon’s influential early poems deliberately provoke melancholy in order to insist upon the subject’s alienation from the world. It traces how the poets Justin Quinn and David Wheatley take on and reject aspects of Mahon’s influence, with a focus on this melancholy. Quinn rejects Mahon’s melancholy and comes to insist emphatically upon connectedness, resulting in his development of a poetics pledged to traditional forms and full rhymes. Wheatley hews fast to early Mahon’s insistence on a gap between us and the world, inflecting that gap with a keen consciousness of environmental crisis. His trajectory, in contrast with later Mahon, is towards an embrace of disjunctive modernist techniques as a means of acknowledging our disconnectedness from the world. Attending to the ways in which Quinn and Wheatley work with and against Mahon’s influence sheds light on the ‘Metre generation’ as one whose poetic inheritance enables a sophisticated and exciting use of form as a tool with which to think through the individual’s relationship to the world.

  8. Beck x Roberts: Comparativos do Diagrama do Metrô de Londres

    Directory of Open Access Journals (Sweden)

    Joaquim Redig

    2015-06-01

    Full Text Available This article was originally written for the course Design and Information Visualization, taught by Prof. André Monat in the doctoral programme in Design at ESDI-UERJ, Escola Superior de Desenho Industrial of the Universidade do Estado do Rio de Janeiro, in 2013, in response to the professor's proposal to examine the suitability of the radial structure proposed by Maxwell J. Roberts for the design of transport network maps. In the article, I seek to show that, being a pre-conventionalized and abstract standard form that does not value the analogical visual relationship with the represented object (the geo-spatial network of the transport system, which varies from city to city), the radial structure does not aid the comprehension and use of the network, compared with the orthogonal/diagonal system designed by Harry Beck for the so-called Diagram of the London Underground, which, starting from the most universal and ancestral orientation code, the compass rose, is based on the axes of the routes. For this reason, Beck's system has become a worldwide paradigm in the field of Information Design, not yet surpassed.

  9. Conceptual design of current lead for large scale high temperature superconducting rotating machine

    International Nuclear Information System (INIS)

    Le, T. D.; Kim, J. H.; Park, S. I.; Kim, H. M.

    2014-01-01

    High-temperature superconducting (HTS) rotating machines always require an electric current of several hundred to several thousand amperes to be led from outside into the cold region of the field coil. Heat losses through the current leads are therefore of tremendous importance. Consequently, it is necessary to find an optimal design for the leads that achieves minimum heat loss during machine operation for a given electrical current. In this paper, a conduction-cooled current lead type for a 10 MW-class HTS rotating machine is chosen, and a conceptual design is discussed and carried out based on estimating the lowest heat loss for a conventional metal lead versus a partially HTS lead. In addition, the steady-state thermal characteristics of each are considered and illustrated.
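
    The benchmark for such a comparison is the classic minimum heat leak of an optimized, purely conduction-cooled metallic lead, which follows from the Wiedemann-Franz law with Lorenz number L0 ≈ 2.45 × 10⁻⁸ W Ω K⁻²:

      \[
        \dot{Q}_{\min} \approx I \sqrt{L_{0}\left(T_{h}^{2} - T_{c}^{2}\right)}
        \approx 47\ \mathrm{W\ kA^{-1}}
        \quad (T_{h} = 300\ \mathrm{K},\ T_{c} \approx 4\ \mathrm{K}),
      \]

    which is the load a partially HTS lead aims to undercut, since the superconducting span carries current without Joule heating and with far lower thermal conductivity.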

  10. Extending 'Contact Tracing' into the Community within a 50-Metre Radius of an Index Tuberculosis Patient Using Xpert MTB/RIF in Urban, Pakistan: Did It Increase Case Detection?

    Directory of Open Access Journals (Sweden)

    Razia Fatima

    Full Text Available Currently, only 62% of incident tuberculosis (TB cases are reported to the national programme in Pakistan. Several innovative interventions are being recommended to detect the remaining 'missed' TB cases. One such intervention involved expanding contact investigation to the community using the Xpert MTB/RIF test.This was a before and after intervention study involving retrospective record review. Passive case finding and household contact investigation was routinely done in the pre-intervention period July 2011-June 2013. Four districts with a high concentration of slums were selected as intervention areas; Lahore, Rawalpindi, Faisalabad and Islamabad. Here, in the intervention period, July 2013-June 2015, contact investigation beyond household was conducted: all people staying within a radius of 50 metres (using Geographical Information System from the household of smear positive TB patients were screened for tuberculosis. Those with presumptive TB were investigated using smear microscopy and the Xpert MTB/RIF test was performed on smear negative patients. All the diagnosed TB patients were linked to TB treatment and care.A total of 783043 contacts were screened for tuberculosis: 23741(3.0% presumptive TB patients were identified of whom, 4710 (19.8% all forms and 4084(17.2% bacteriologically confirmed TB patients were detected. The contribution of Xpert MTB/RIF to bacteriologically confirmed TB patients was 7.6%. The yield among investigated presumptive child TB patients was 5.1%. The overall yield of all forms TB patients among investigated was 22.3% among household and 19.1% in close community. The intervention contributed an increase of case detection of bacteriologically confirmed tuberculosis by 6.8% and all forms TB patients by 7.9%.Community contact investigation beyond household not only detected additional TB patients but also increased TB case detection. However, further long term assessments and cost-effectiveness studies are

  11. High-performance metabolic profiling of plasma from seven mammalian species for simultaneous environmental chemical surveillance and bioeffect monitoring

    OpenAIRE

    Park, Youngja H.; Lee, Kichun; Soltow, Quinlyn A.; Strobel, Frederick H.; Brigham, Kenneth L.; Parker, Richard E.; Wilson, Mark E.; Sutliff, Roy L.; Mansfield, Keith G.; Wachtman, Lynn M.; Ziegler, Thomas R.; Jones, Dean P.

    2012-01-01

    High-performance metabolic profiling (HPMP) by Fourier-transform mass spectrometry coupled to liquid chromatography gives relative quantification of thousands of chemicals in biologic samples but has had little development for use in toxicology research. In principle, the approach could be useful to detect complex metabolic response patterns to toxicologic exposures and to detect unusual abundances or patterns of potentially toxic chemicals. As an initial study to develop these possible uses,...

  12. High performance anode for advanced Li batteries

    Energy Technology Data Exchange (ETDEWEB)

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the fading or failure of capacity that results from stress-induced fracturing of the Si particles and their de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances the adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  13. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  14. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.

    Science.gov (United States)

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-06-01

    Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
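
    A minimal sketch of the SARIMA half of such a hybrid (Python/statsmodels; the synthetic monthly series and the (1,1,1)(1,1,1,12) orders are illustrative, not the paper's fitted model):

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      # Synthetic monthly MSW series (thousand tonnes): trend + season + noise.
      rng = np.random.default_rng(42)
      idx = pd.date_range("2005-01", periods=72, freq="MS")
      msw = (60 + 0.5 * np.arange(72)
             + 8 * np.sin(2 * np.pi * idx.month / 12)
             + rng.normal(scale=2, size=72))
      series = pd.Series(msw, index=idx)

      # Seasonal ARIMA with a 12-month period.
      model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
      fit = model.fit(disp=False)

      # Forecast five years ahead at month resolution.
      print(fit.forecast(steps=60).tail(12))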

  15. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    Science.gov (United States)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, with particular attention to clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.

  16. Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws

    Science.gov (United States)

    Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.

    2009-04-01

    Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow, with each bump of the order of 10² to 10³ m in length and 10¹ m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m, and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r² = 0.48), approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E-W space.
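
    A quick numerical check of the reported scaling relation at the sample means:

```python
# Quick numerical check of the reported width-length scaling W ≈ 7·L^(1/2)
# (L and W in metres), evaluated at the reported mean length. Given the
# scatter (r² = 0.48), the predicted width is consistent in magnitude with
# the reported mean width.
mean_L = 629.0                      # metres, n = 58,983
predicted_W = 7.0 * mean_L ** 0.5   # ≈ 176 m
print(f"predicted W at mean L: {predicted_W:.0f} m (reported mean W: 209 m)")
```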

  17. A laboratory scale analysis of groundwater flow and salinity distribution in the Aespoe area

    International Nuclear Information System (INIS)

    Svensson, Urban

    1999-12-01

    This report concerns a study which was conducted for SKB. The conclusions and viewpoints presented in the report are those of the author(s) and do not necessarily coincide with those of the client. The objective of the study is to develop, calibrate and apply a numerical simulation model of the Aespoe Hard Rock Laboratory (HRL). An area of 800 x 600 metres centred on the HRL gives the horizontal extent of the model. In the vertical direction the model covers the depth interval from 200 to 560 metres. The model is based on a mathematical model that includes equations for the Darcy velocities, mass conservation and salinity distribution. Gravitational effects are thus fully accounted for. A site-scale groundwater model was used to generate boundary conditions for all boundaries. Transmissivities of major fracture zones are based on field data. Fractures and fracture zones with a length scale between 5 and 320 metres are accounted for by a novel method based on a discrete fracture network. A small background conductivity is added to account for fractures smaller than the grid size. A calibration of the model is carried out using field data from the Aespoe HRL. A satisfactory agreement with field data is obtained. Main results from the model include vertical and horizontal sections of flow, salinity and hydraulic head distributions for the completed tunnel. A sensitivity study, where the properties of the conductivity field are modified, is also carried out. The general conclusion of the study is that the model developed can simulate the conditions at the Aespoe HRL in a realistic manner.

  18. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which puts stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed to meet the computational challenges posed by the NLC as well as by projects such as PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization) or on the scale of an entire structure (beam heating and long-range wakefields).

  19. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of original high-dimensional data while preserving selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify the key components underlying spectral dimensionality reduction techniques and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during the manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
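
    For orientation, a minimal serial sketch of the eigendecomposition core shared by spectral dimensionality reduction methods (classical MDS here) is shown below; the paper's contribution is the distributed-memory parallelization of exactly these kernel-construction and eigensolver steps. All names and sizes are illustrative.

```python
# Serial core common to spectral dimensionality reduction methods, here
# classical MDS: build a dense Gram matrix and keep the top eigenvectors.
# This only illustrates the computation that the framework parallelizes.
import numpy as np

def classical_mds(X, k=2):
    """Embed the rows of X into k dimensions via eigendecomposition."""
    n = X.shape[0]
    D2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # squared distances
    J = np.eye(n) - np.ones((n, n)) / n                    # centering matrix
    B = -0.5 * J @ D2 @ J                                  # Gram matrix
    w, V = np.linalg.eigh(B)                               # ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]                          # top-k components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

Y = classical_mds(np.random.default_rng(1).normal(size=(200, 50)), k=2)
print(Y.shape)  # (200, 2)
```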

  20. Low-Temperature Soft-Cover Deposition of Uniform Large-Scale Perovskite Films for High-Performance Solar Cells.

    Science.gov (United States)

    Ye, Fei; Tang, Wentao; Xie, Fengxian; Yin, Maoshu; He, Jinjin; Wang, Yanbo; Chen, Han; Qiang, Yinghuai; Yang, Xudong; Han, Liyuan

    2017-09-01

    Large-scale high-quality perovskite thin films are crucial to produce high-performance perovskite solar cells. However, for perovskite films fabricated by solvent-rich processes, film uniformity can be compromised by convection during thermal evaporation of the solvent. Here, a scalable low-temperature soft-cover deposition (LT-SCD) method is presented, where the thermal-convection-induced defects in perovskite films are eliminated through a strategy of surface tension relaxation. Compact, homogeneous perovskite films free of convection-induced defects are obtained on an area of 12 cm², which enables a power conversion efficiency (PCE) of 15.5% on a solar cell with an area of 5 cm². This is the highest efficiency at this large cell area. A PCE of 15.3% is also obtained on a flexible perovskite solar cell deposited on a polyethylene terephthalate substrate, owing to the advantage of the presented low-temperature processing. Hence, the present LT-SCD technology provides a new non-spin-coating route to the deposition of large-area uniform perovskite films for both rigid and flexible perovskite devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Grain boundary engineering with nano-scale InSb producing high performance InxCeyCo4Sb12+z skutterudite thermoelectrics

    Directory of Open Access Journals (Sweden)

    Han Li

    2017-12-01

    Thermoelectric semiconductors based on CoSb3 hold the best promise for recovering industrial or automotive waste heat because of their high efficiency and relatively abundant, lead-free constituent elements. However, higher efficiency is needed before thermoelectrics reach economic viability for widespread use. In this study, n-type InxCeyCo4Sb12+z skutterudites with high thermoelectric performance are produced by combining several phonon scattering mechanisms in a panoscopic synthesis. Using melt spinning followed by spark plasma sintering (MS-SPS), bulk InxCeyCo4Sb12+z alloys are formed with grain boundaries decorated with a nano-phase of InSb. The skutterudite matrix has grains on a scale of 100-200 nm, and the InSb nano-phase, with a typical size of 5-15 nm, is evenly dispersed at the grain boundaries of the skutterudite matrix. Coupled with the presence of defects on the Sb sublattice, this multi-scale nanometer structure is exceptionally effective in scattering phonons; InxCeyCo4Sb12/InSb nano-composites therefore have very low lattice thermal conductivity and high zT values, in excess of 1.5 at 800 K.

  2. High-Performance Small-Scale Solvers for Moving Horizon Estimation

    DEFF Research Database (Denmark)

    Frison, Gianluca; Vukov, Milan; Poulsen, Niels Kjølstad

    2015-01-01

    implementation techniques focusing on small-scale problems. The proposed MHE solver is implemented using custom linear algebra routines and is compared against implementations using BLAS libraries. Additionally, the MHE solver is interfaced to a code generation tool for nonlinear model predictive control (NMPC...

  3. Optimization of a micro-scale, high throughput process development tool and the demonstration of comparable process performance and product quality with biopharmaceutical manufacturing processes.

    Science.gov (United States)

    Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J

    2017-07-14

    In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench scale and the clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), the comparable product quality results at all scales make this tool an appropriate scale model to enable purification and product quality comparisons with HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Decreased attention to object size information in scale errors performers.

    Science.gov (United States)

    Grzyb, Beata J; Cangelosi, Angelo; Cattani, Allegra; Floccia, Caroline

    2017-05-01

    Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. The existing explanations of these curious action errors assume (but have never explicitly tested) children's decreased attention to object size information. This study investigated attention to object size information in scale errors performers. Two groups of children aged 18-25 months (N=52) and 48-60 months (N=23) were tested in two consecutive tasks: an action task that replicated the original scale errors elicitation situation, and a looking task that involved watching, on a computer screen, actions performed with adequately or inadequately sized objects. Our key finding - that children performing scale errors in the action task subsequently paid less attention to size changes than non-scale errors performers in the looking task - suggests that the origins of scale errors in childhood operate already at the perceptual level, and not at the action level. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    Science.gov (United States)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and the ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard-resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single-component runs and standard-resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year, and achieved about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
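
    A quick consistency check of the quoted throughput figures:

```python
# Consistency check of the reported throughput: 23,404 cores at a cost of
# 250 thousand processor-hours per simulated year implies roughly two
# simulated years per wall-clock day, matching the quoted rate.
cores = 23_404
hours_per_sim_year = 250_000              # processor-hours per simulated year
core_hours_per_day = cores * 24           # ≈ 562,000 processor-hours per day
print(core_hours_per_day / hours_per_sim_year)  # ≈ 2.2 simulated years/day
```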

  6. Research on performance evaluation and anti-scaling mechanism of green scale inhibitors by static and dynamic methods

    International Nuclear Information System (INIS)

    Liu, D.

    2011-01-01

    Increasing environmental concerns and discharge limitations have imposed additional challenges in treating process waters. Thus, the concept of 'Green Chemistry' was proposed and green scale inhibitors became a focus of water treatment technology. Finding economical and environmentally friendly inhibitors is one of the major research focuses nowadays. In this dissertation, the inhibition performance of different phosphonates as CaCO3 scale inhibitors in simulated cooling water was evaluated. Homo-, co-, and ter-polymers were also investigated for their performance as Ca-phosphonate inhibitors. Adding polymers as inhibitors together with phosphonates could reduce Ca-phosphonate precipitation and enhance the inhibition efficiency for CaCO3 scale. The synergistic effect of poly-aspartic acid (PASP) and poly-epoxy-succinic acid (PESA) on inhibition of scaling was studied using both static and dynamic methods. Results showed that the anti-scaling performance of PASP combined with PESA was superior to that of PASP or PESA alone for CaCO3, CaSO4 and BaSO4 scale. The influence of dosage, temperature and Ca2+ concentration was also investigated in a simulated cooling water circuit. Moreover, SEM analysis demonstrated the modification of crystalline morphology in the presence of PASP and PESA. In this work, we also investigated the respective inhibition effectiveness of copper and zinc ions for scaling in drinking water by the method of Rapid Controlled Precipitation (RCP). The results indicated that zinc and copper ions were highly efficient inhibitors at low concentrations, and SEM and IR analyses showed that copper and zinc ions could affect calcium carbonate germination and change the crystal morphology. Moreover, the influence of temperature and dissolved CO2 on the scaling potential of a mineral water (Salvetat) in the presence of copper and zinc ions was studied in laboratory experiments. An ideal scale inhibitor should be a solid form

  7. Unemployment and inactivity in the Brazilian metropolises: the differences between men and women

    Directory of Open Access Journals (Sweden)

    Pedro Rodrigues de Oliveira

    2011-01-01

    This paper analyzes the recent evolution of the structure of unemployment and inactivity in the Brazilian metropolises. Beyond an overall panorama, separate analyses were carried out by gender. The response of inactivity to the variable "number of children in the household" stands out the most: the relationship is negative for men and positive for women. Moreover, the patterns observed among poor and non-poor women are very different - inactivity among low-income women is significantly higher - reflecting differences in schooling and, probably, difficulties in access to childcare.

  8. Automatic Energy Schemes for High Performance Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as Infiniband, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them in addition to DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.

  9. Plurinationality and cosmopolitanism: the cultural diversity of cities and behavioral diversity in metropolises

    Directory of Open Access Journals (Sweden)

    José Luiz Quadros de Magalhães

    2009-12-01

    The article analyzes two important contemporary phenomena: the formation of the plurinational state as a rupture with the modern, national and homogenizing state, and the multiple identities of contemporary cosmopolitan metropolises. Analyzing the formation of the national state as homogenizing and undemocratic, the text seeks to establish a connection between the two phenomena and to find a plural, democratic solution that, while recognizing multiple identifications, searches for a common trace of humanity in each person, allowing the construction of plural spaces of dialogue under conditions of equality in diversity, against the risk of excessive fragmentation of an intolerant or fascist character.

  10. Feasibility of large-scale phosphoproteomics with higher energy collisional dissociation fragmentation

    DEFF Research Database (Denmark)

    Nagaraj, Nagarjuna; D'Souza, Rochelle C J; Cox, Juergen

    2010-01-01

    Mass spectrometry (MS)-based proteomics now enables the analysis of thousands of phosphorylation sites in single projects. Among a wide range of analytical approaches, the combination of high-resolution MS scans in an Orbitrap analyzer with low-resolution MS/MS scans in a linear ion trap has proven ... large-scale phosphoproteome analysis alongside collision-induced dissociation (CID) and electron capture/transfer dissociation (ECD/ETD).

  11. Computational challenges of large-scale, long-time, first-principles molecular dynamics

    International Nuclear Information System (INIS)

    Kent, P R C

    2008-01-01

    Plane wave density functional calculations have traditionally been able to use the largest available supercomputing resources. We analyze the scalability of modern projector-augmented wave implementations to identify the challenges in performing molecular dynamics calculations of large systems containing many thousands of electrons. Benchmark calculations on the Cray XT4 demonstrate that global linear-algebra operations are the primary reason for limited parallel scalability. Plane-wave related operations can be made sufficiently scalable. Improving parallel linear-algebra performance is an essential step to reaching longer timescales in future large-scale molecular dynamics calculations

  12. First report of Geosmithia morbida on ambrosia beetles emerged from thousand cankers-diseased Juglans nigra in Ohio

    Science.gov (United States)

    Jennifer Juzwik; M. McDermott-Kubeczko; T. J. Stewart; M. D. Ginzel

    2016-01-01

    Eastern black walnut (Juglans nigra) is a highly-valued species for timber and nut production in the eastern United States. Thousand cankers disease (TCD), caused by the interaction of the walnut twig beetle (Pityophthorus juglandis) and the canker fungus Geosmithia morbida (Tisserat et al. 2009), was first...

  13. Assessing the performance of multi-purpose channel management measures at increasing scales

    Science.gov (United States)

    Wilkinson, Mark; Addy, Steve

    2016-04-01

    In addition to hydroclimatic drivers, sediment deposition from high energy river systems can reduce channel conveyance capacity and lead to significant increases in flood risk. There is an increasing recognition that we need to work with the interplay of natural hydrological and morphological processes in order to attenuate flood flows and manage sediment (both coarse and fine). This typically includes both catchment (e.g. woodland planting, wetlands) and river (e.g. wood placement, floodplain reconnection) restoration approaches. The aim of this work was to assess at which scales channel management measures (notably wood placement and flood embankment removal) are most appropriate for flood and sediment management in high energy upland river systems. We present research findings from two densely instrumented research sites in Scotland which regularly experience flood events and have associated coarse sediment problems. We assessed the performance of a range of novel trial measures for three different scales: wooded flow restrictors and gully tree planting at the small scale (transport to optimise performance. At the large scale, well designed flood embankment lowering can improve connectivity to the floodplain during low to medium return period events. However, ancillary works to stabilise the bank failed thus emphasising the importance of letting natural processes readjust channel morphology and hydrological connections to the floodplain. Although these trial measures demonstrated limited effects, this may be in part owing to restrictions in the range of hydroclimatological conditions during the study period and further work is needed to assess the performance under more extreme conditions. This work will contribute to refining guidance for managing channel coarse sediment problems in the future which in turn could help mitigate flooding using natural approaches.

  14. High Performance Redox Flow Batteries: An Analysis of the Upper Performance Limits of Flow Batteries Using Non-aqueous Solvents

    International Nuclear Information System (INIS)

    Sun, C.-N.; Mench, M.M.; Zawodzinski, T.A.

    2017-01-01

    Redox flow batteries (RFBs) are a promising technology for grid-scale electrochemical energy storage. In this work, we use a recently achieved high-performance flow battery performance curve as a basis to assess the maximum achievable performance of an RFB employing non-aqueous solutions as active materials. First, we show high performance in a vanadium redox flow battery (VRFB), specifically a limiting situation in which the cell losses are ohmic in nature and derive from electrolyte conductance. Based on that case, we analyze the analogous limiting behavior of non-aqueous (NA) systems using a series of calculations assuming similar ohmic losses, scaled by the relative electrolyte resistances, with a higher-voltage redox couple assumed for the NA battery. The results indicate that the NA battery performance is limited by the low electrolyte conductivity to a fraction of the performance of the VRFB. Given the narrow window in which the NA RFB offers advantages, even under the most generous limiting assumptions related to performance and ignoring the numerous other disadvantageous aspects of these systems, we conclude that this technology is unlikely under present circumstances to provide practical large-scale energy storage solutions.
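
    The limiting argument can be sketched as follows: for a purely ohmically limited cell, the peak areal power is P_max = V_oc²/(4·ASR), with the area-specific resistance (ASR) scaling inversely with electrolyte conductivity. The numbers below are illustrative placeholders, not values from the paper:

```python
# Hedged sketch of the paper's limiting argument. For a purely ohmically
# limited cell, peak areal power is P_max = V_oc^2 / (4 * ASR), and ASR
# scales inversely with electrolyte conductivity. All values are
# hypothetical placeholders, not measurements from the paper.
def peak_power(v_oc, asr):
    """Peak areal power (W/cm^2) of an ohmically limited cell."""
    return v_oc ** 2 / (4.0 * asr)

asr_aqueous = 0.5                     # ohm*cm^2, hypothetical VRFB-like cell
conductivity_ratio = 10.0             # aqueous/non-aqueous, hypothetical
asr_na = asr_aqueous * conductivity_ratio

print(peak_power(1.4, asr_aqueous))   # aqueous couple, ~1.4 V: ~0.98 W/cm^2
print(peak_power(3.0, asr_na))        # higher-voltage NA couple: ~0.45 W/cm^2
```

    Even with a much higher-voltage couple, the larger ohmic resistance leaves the non-aqueous cell behind the aqueous one in this sketch, which is the qualitative point of the abstract.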

  15. From hundreds to thousands: Widening the normal human Urinome

    Directory of Open Access Journals (Sweden)

    Laura Santucci

    2014-12-01

    The data are related to Santucci et al. (in press) [1] and are available both here and at ChorusProject.org under the project name "From hundreds to thousands: widening the normal human Urinome". The material supplied to ChorusProject.org includes the technical MS spectra data only.

  16. Readings and readers of Richard Morse: the trajectory of a book on the formation of the São Paulo metropolis

    Directory of Open Access Journals (Sweden)

    Ana Claudia Veiga de Castro

    2013-01-01

    De comunidade à metrópole: a biografia de São Paulo was first published in 1954 and then republished in 1970 as Formação histórica de São Paulo: de comunidade à metrópole. Written by a young US researcher fascinated by Latin America, this material was originally submitted as his PhD thesis at Columbia University in 1952. Since then, Richard Morse's (1922-2001) work has come a long way and is now considered a primary reference in the history of the urban development of São Paulo. This article briefly recovers the readers' responses when Morse's research was first published, and how they ensured the book's importance in Brazilian historiography. The aim is to draw a parallel trajectory of the book and its author - the young researcher at Columbia who became a professor of Latin American History at Yale - and to discuss its importance in São Paulo's historiography as well as its contribution to a better understanding of the city.

  17. Precision ring rolling technique and application in high-performance bearing manufacturing

    Directory of Open Access Journals (Sweden)

    Hua Lin

    2015-01-01

    High-performance bearings have significant applications in many important industrial fields, such as automobiles, precision machine tools, and wind power. Precision ring rolling is an advanced rotary forming technique used to manufacture high-performance seamless bearing rings and thus improve the working life of bearings. In this paper, three kinds of precision ring rolling techniques, adapted to different dimensional ranges of bearings, are introduced: cold ring rolling for small-scale bearings, hot radial ring rolling for medium-scale bearings, and hot radial-axial ring rolling for large-scale bearings. The forming principles, technological features and forming equipment for the three kinds of precision ring rolling techniques are summarized, the technological development and industrial application in China are introduced, and the main trends in technological development are described.

  18. A Review of Lightweight Thread Approaches for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Castello, Adrian; Pena, Antonio J.; Seo, Sangmin; Mayo, Rafael; Balaji, Pavan; Quintana-Orti, Enrique S.

    2016-09-12

    High-level, directive-based solutions are becoming the programming models (PMs) of choice for multi/many-core architectures. Several solutions relying on operating system (OS) threads work well with a moderate number of cores. However, exascale systems will spawn hundreds of thousands of threads in order to exploit their massive parallel architectures, and conventional OS threads are too heavy for that purpose. Several lightweight thread (LWT) libraries have recently appeared, offering lighter mechanisms to tackle massive concurrency. In order to examine the suitability of LWTs in high-level runtimes, we develop a set of microbenchmarks consisting of commonly found patterns in current parallel codes. Moreover, we study the semantics offered by some LWT libraries in order to expose the similarities between different LWT application programming interfaces. This study reveals that a reduced set of LWT functions can be sufficient to cover the common parallel code patterns, and that those LWT libraries perform better than OS-thread-based solutions for the task and nested parallelism that are becoming more popular with new architectures.

  19. Impact of continuing scaling on the device performance of 3D cylindrical junction-less charge trapping memory

    International Nuclear Information System (INIS)

    Li Xinkai; Huo Zongliang; Jin Lei; Jiang Dandan; Hong Peizhen; Xu Qiang; Tang Zhaoyun; Li Chunlong; Ye Tianchun

    2015-01-01

    This work presents a comprehensive analysis of 3D cylindrical junction-less charge trapping memory device performance with regard to the continued scaling of the structure dimensions. The key aspects of device performance, such as program/erase speed, vertical charge loss, and lateral charge migration under high temperature, are intensively studied using the Sentaurus 3D device simulator. Although scaling of the channel radius is beneficial for operation speed, it leads to a retention challenge due to vertical leakage, especially enhanced charge loss through TPO. Scaling of the gate length not only decreases the program/erase speed but also leads to worse lateral charge migration. Scaling of the spacer length is critical for the interference between adjacent cells and should be carefully optimized according to specific cell operation conditions. The gate stack shape is also found to be an important factor affecting lateral charge migration. Our results provide guidance for high-density and high-reliability 3D CTM integration.

  20. High nitrogen-containing cotton derived 3D porous carbon frameworks for high-performance supercapacitors

    Science.gov (United States)

    Fan, Li-Zhen; Chen, Tian-Tian; Song, Wei-Li; Li, Xiaogang; Zhang, Shichao

    2015-01-01

    Supercapacitors fabricated from 3D porous carbon frameworks, such as graphene- and carbon nanotube (CNT)-based aerogels, have been highly attractive due to their various advantages. However, their high cost and insufficient yield have inhibited their large-scale application. Here we demonstrate a facile and easily scalable approach for the large-scale preparation of novel 3D nitrogen-containing porous carbon frameworks from ultralow-cost commercial cotton. Electrochemical measurements show that the optimal nitrogen-containing cotton-derived carbon frameworks, with a high nitrogen content (12.1 mol%) and a low surface area of 285 m² g⁻¹, present high specific capacities of 308 and 200 F g⁻¹ in KOH electrolyte at current densities of 0.1 and 10 A g⁻¹, respectively, with very limited capacitance loss over 10,000 cycles in both aqueous and gel electrolytes. Moreover, the electrode exhibits a capacitance of up to 220 F g⁻¹ at 0.1 A g⁻¹ and excellent flexibility (with negligible capacitance loss under different bending angles) in the polyvinyl alcohol/KOH gel electrolyte. The observed performance competes well with that of electrodes made from similar 3D frameworks of graphene or CNTs. Therefore, the ultralow-cost and simple strategy demonstrated here shows great potential for the scalable production of high-performance carbon-based supercapacitors in industry. PMID:26472144

  2. Results from core-edge experiments in high Power, high performance plasmas on DIII-D

    Directory of Open Access Journals (Sweden)

    T.W. Petrie

    2017-08-01

    Significant challenges to reducing divertor heat flux in highly powered near-double-null divertor (DND) hybrid plasmas, while still maintaining both high performance metrics and low enough density for the application of RF heating, are identified. For these DNDs on DIII-D, the peak heat flux at the outer target scales as q⊥P ∝ (PSOL × IP)^0.92 for PSOL = 8-19 MW and IP = 1.0-1.4 MA, consistent with standard ITPA scaling for single-null H-mode plasmas. Two divertor heat flux reduction methods were tested. First, applying the puff-and-pump radiating divertor to DIII-D plasmas may be problematical at high power and H98 (≥ 1.5) due to the improvement in confinement time with deuterium gas puffing, which can lead to unacceptably high core density under certain conditions. Second, q⊥P for these high-performance DNDs was reduced by ≈35% when an open divertor was closed on the common flux side of the outer divertor target ("semi-slot"), although heating near the slot opening remained a significant source of impurity contamination of the core.
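
    A quick illustration of what the quoted scaling implies across the stated parameter ranges (only the ratio between operating points is meaningful, since the proportionality constant is not given):

```python
# Illustration of the quoted scaling q_perp ∝ (P_SOL × I_p)^0.92 across the
# stated parameter ranges. Only the ratio between two operating points is
# meaningful here; the proportionality constant is not reported.
low = 8.0 * 1.0      # P_SOL = 8 MW,  I_p = 1.0 MA
high = 19.0 * 1.4    # P_SOL = 19 MW, I_p = 1.4 MA
print((high / low) ** 0.92)  # ≈ 3.0: expected increase in peak heat flux
```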

  3. Co-Cure-Ply Resins for High Performance, Large-Scale Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — Large-scale composite structures are commonly joined by secondary bonding of molded-and-cured thermoset components. This approach may result in unpredictable joint...

  4. Small-scale tunnel test for blast performance

    International Nuclear Information System (INIS)

    Felts, J E; Lee, R J

    2014-01-01

    The data reported here provide a validation of a small-scale tunnel test as a tool to guide the optimization of new explosives for blast performance in tunnels. The small-scale arrangement consisted of a 2-g booster and a 10-g sample mounted at the closed end of a 127-mm diameter by 4.6-m long steel tube with pressure transducers along its length. The three performance characteristics considered were peak pressure, initial energy release, and impulse. The relative performance of five explosives was compared with results from a 1.16-m diameter by 30-m long tunnel that used 2.27-kg samples. The peak pressure values did not correlate between the tunnels. Partial impulse for the explosives did rank similarly. The initial energy release was determined from a one-dimensional point-source analysis, which nearly tracked with impulse, suggesting additional energy release further down the tunnel for some explosives. This test is a viable tool for optimizing compositional variations for blast performance in target scenarios of similar geometry.

  5. Solution-Processed Graphene/MnO 2 Nanostructured Textiles for High-Performance Electrochemical Capacitors

    KAUST Repository

    Yu, Guihua

    2011-07-13

    Large-scale energy storage systems with low cost, high power, and long cycle life are crucial for addressing the energy problem when connected with renewable energy production. To realize grid-scale applications of energy storage devices, several key issues remain, including the development of low-cost, high-performance materials that are environmentally friendly and compatible with low-temperature, large-scale processing. In this report, we demonstrate that solution-exfoliated graphene nanosheets (∼5 nm thickness) can be conformably coated from solution onto three-dimensional, porous textile support structures for high loading of active electrode materials, facilitating the access of electrolytes to those materials. With further controlled electrodeposition of pseudocapacitive MnO2 nanomaterials, the hybrid graphene/MnO2-based textile yields high-capacitance performance, with a specific capacitance of up to 315 F/g. Moreover, we have successfully fabricated asymmetric electrochemical capacitors with graphene/MnO2-textile as the positive electrode and single-walled carbon nanotube (SWNT)-textile as the negative electrode in an aqueous Na2SO4 electrolyte solution. These devices exhibit promising characteristics, with a maximum power density of 110 kW/kg, an energy density of 12.5 Wh/kg, and excellent cycling performance of ∼95% capacitance retention over 5000 cycles. Such low-cost, high-performance energy textiles based on solution-processed graphene/MnO2 hierarchical nanostructures offer great promise in large-scale energy storage device applications. © 2011 American Chemical Society.

  6. Centrifugal fans: Similarity, scaling laws, and fan performance

    Science.gov (United States)

    Sardar, Asad Mohammad

    Centrifugal fans are rotodynamic machines used for moving air continuously against moderate pressures through ventilation and air conditioning systems. Five major topics are presented in this thesis: (1) analysis of the fan scaling laws and the consequences of dynamic similarity on modelling; (2) detailed flow visualization studies (in water) covering the flow path from the fan blade exit to the evaporator core of an actual HVAC fan scroll-diffuser module; (3) mean velocity and turbulence intensity measurements (flow field studies) at the inlet and outlet of a large-scale blower; (4) fan installation effects on overall fan performance and evaluation of fan testing methods; (5) two-point coherence and spectral measurements conducted on an actual HVAC fan module for flow structure identification of possible aeroacoustic noise sources. A major objective of the study was to identify flow structures within the HVAC module that are responsible for noise, and in particular "rumble noise" generation. Possible mechanisms for the generation of flow-induced noise in the automotive HVAC fan module are also investigated. It is demonstrated that different modes of HVAC operation produce very different internal flow characteristics, with implications for both HVAC airflow performance and noise characteristics. It is demonstrated from the principles of complete dynamic similarity that the fan scaling laws require Reynolds number matching as a necessary condition for developing scale-model fans or fan test facilities. The physical basis for the fan scaling laws was established both from pure dimensional analysis and from the fundamental equations of fluid motion. Fan performance was measured in air in a three-times-scale model (large-scale blower) of an actual forward-curved automotive HVAC blower. Different fan testing methods (based on AMCA fan test codes) were compared on the basis of static pressure measurements. Also, the flow through an actual HVAC
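
    The classical fan affinity (scaling) laws referred to above relate dynamically similar operating points: flow Q ∝ N·D³, pressure rise ΔP ∝ ρ·N²·D², and shaft power P ∝ ρ·N³·D⁵, with N the impeller speed, D the diameter and ρ the gas density. A small sketch, with illustrative numbers rather than values from the thesis:

```python
# Fan affinity laws for dynamically similar operating points:
# Q ∝ N·D^3, ΔP ∝ ρ·N^2·D^2, P ∝ ρ·N^3·D^5. Baseline numbers are
# illustrative placeholders, not measurements from the thesis.
def scale_fan(q, dp, p, speed_ratio, diameter_ratio, density_ratio=1.0):
    """Apply the fan affinity laws to a baseline operating point."""
    n, d, r = speed_ratio, diameter_ratio, density_ratio
    return (q * n * d**3,            # volumetric flow
            dp * r * n**2 * d**2,    # pressure rise
            p * r * n**3 * d**5)     # shaft power

# A 3x-scale model run at 1/3 speed keeps tip speed (and pressure rise)
# constant while the Reynolds number increases, which is why such models
# can satisfy the Reynolds-matching condition discussed in the thesis.
print(scale_fan(q=0.05, dp=250.0, p=20.0, speed_ratio=1/3, diameter_ratio=3.0))
```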

  7. Fast Decentralized Averaging via Multi-scale Gossip

    Science.gov (United States)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
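
    A minimal simulation of the pairwise-averaging primitive underlying gossip algorithms such as this one is sketched below; for simplicity, pairs are drawn from the complete graph, whereas the actual algorithm restricts exchanges to a hierarchy of subgraphs.

```python
# Randomized pairwise gossip averaging: two randomly chosen nodes repeatedly
# replace their values with the pair's average. The global mean is conserved
# while the dispersion shrinks toward zero, so every node converges to the
# average. (Complete-graph simplification of the algorithm's exchanges.)
import numpy as np

rng = np.random.default_rng(42)
values = rng.normal(size=1000)            # one value per node
target = values.mean()

for _ in range(50_000):                   # random pairwise exchanges
    i, j = rng.integers(0, values.size, size=2)
    values[i] = values[j] = (values[i] + values[j]) / 2

print(abs(values.mean() - target))        # ≈ 0: the mean is conserved
print(values.std())                       # dispersion shrinks toward 0
```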

  8. Cross-sectional fluctuation scaling in the high-frequency illiquidity of Chinese stocks

    Science.gov (United States)

    Cai, Qing; Gao, Xing-Lu; Zhou, Wei-Xing; Stanley, H. Eugene

    2018-03-01

    Taylor's law of temporal and ensemble fluctuation scaling has been ubiquitously observed in diverse complex systems including financial markets. Stock illiquidity is an important nonadditive financial quantity, which is found to comply with Taylor's temporal fluctuation scaling law. In this paper, we perform the cross-sectional analysis of the 1 min high-frequency illiquidity time series of Chinese stocks and unveil the presence of Taylor's law of ensemble fluctuation scaling. The estimated daily Taylor scaling exponent fluctuates around 1.442. We find that Taylor's scaling exponents of stock illiquidity do not relate to the ensemble mean and ensemble variety of returns. Our analysis uncovers a new scaling law of financial markets and might stimulate further investigations for a better understanding of financial markets' dynamics.
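
    A sketch of how such an ensemble Taylor exponent is estimated: compute the ensemble mean and variance per asset and regress log-variance on log-mean; Taylor's law σ² = a·μ^b holds when the points fall on a line with slope b. The synthetic data below build in an exponent near the reported value.

```python
# Estimating a Taylor's-law scaling exponent b in variance = a * mean^b by
# log-log regression. The synthetic per-asset series are stand-ins for the
# 1-min illiquidity data, with a built-in exponent of about 1.4.
import numpy as np

rng = np.random.default_rng(7)
n_assets, n_obs, b_true = 300, 390, 1.4

means = 10 ** rng.uniform(-1, 2, n_assets)          # spread of mean levels
samples = [rng.normal(m, np.sqrt(m ** b_true), n_obs) for m in means]

m = np.array([s.mean() for s in samples])
v = np.array([s.var() for s in samples])
b_hat, log_a = np.polyfit(np.log(m), np.log(v), 1)  # slope = Taylor exponent
print(f"estimated Taylor exponent b ≈ {b_hat:.2f}")  # ≈ 1.4
```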

  9. A re-entrant flowshop heuristic for online scheduling of the paper path in a large scale printer

    NARCIS (Netherlands)

    Waqas, U.; Geilen, M.C.W.; Kandelaars, J.; Somers, L.J.A.M.; Basten, T.; Stuijk, S.; Vestjens, P.G.H.; Corporaal, H.

    2015-01-01

    A Large Scale Printer (LSP) is a Cyber Physical System (CPS) printing thousands of sheets per day with high quality. The print requests arrive at run-time, requiring online scheduling. We capture the LSP scheduling problem as online scheduling of re-entrant flowshops with sequence-dependent setup times.

  10. High-performance vertical organic transistors.

    Science.gov (United States)

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation whose behavior is limited by the injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. The effect of primary sedimentation on full-scale WWTP nutrient removal performance.

    Science.gov (United States)

    Puig, S; van Loosdrecht, M C M; Flameling, A G; Colprim, J; Meijer, S C F

    2010-06-01

    Traditionally, the performance of full-scale wastewater treatment plants (WWTPs) is measured based on influent and/or effluent and waste sludge flows and concentrations. Full-scale WWTP data typically have a high variance and often contain (large) measurement errors. A good process engineering evaluation of WWTP performance is therefore difficult, which also makes it hard to evaluate the effect of process changes in a plant or to compare plants with each other. In this paper we use a case study of a full-scale nutrient-removing WWTP. The plant normally treats presettled wastewater; as a means to increase nutrient removal, it was operated for a period with raw wastewater (27% of the influent flow) by-passing primary sedimentation. The effect of raw wastewater addition has been evaluated using different approaches: (i) influent characteristics, (ii) design retrofit, (iii) effluent quality, (iv) removal efficiencies, (v) activated sludge characteristics, (vi) microbial activity tests and FISH analysis, and (vii) performance assessment based on mass balance evaluation. This paper demonstrates that the mass-balance evaluation approach helps WWTP engineers to distinguish and quantify between different strategies where the other approaches could not. In the studied case, by-passing raw wastewater (27% of the influent flow) directly to the biological reactor did not improve the effluent quality or the nutrient removal efficiency of the WWTP. The increase in the influent C/N and C/P ratios was associated with particulate compounds with a low COD/VSS ratio and a high non-biodegradable COD fraction. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    ZHANG,W.; MARNERIS, I.; SANDBERG, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together under selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and the coupling inductance of upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network whose equivalent polynomial form has a degree of several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.
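
    A minimal sketch of this kind of lumped-element ladder model is shown below: each dipole is represented by one section with series coil resistance and inductance and a shunt capacitance, and sections are cascaded with ABCD (transmission) matrices to obtain the frequency response of the whole chain. All element values are illustrative placeholders, not machine parameters.

```python
# Lumped-element ladder model of a dipole chain: one L-section per magnet
# (series R + jwL coil impedance, shunt jwC to ground), cascaded by
# multiplying ABCD matrices. Element values are illustrative placeholders.
import numpy as np

def chain_response(freq_hz, n_sections=100, R=0.01, L=0.1, C=1e-8):
    """Open-circuit voltage transfer V_out/V_in of an RLC ladder."""
    s = 2j * np.pi * freq_hz          # jw
    Z = R + s * L                     # series impedance per section
    Y = s * C                         # shunt admittance per section
    section = np.array([[1 + Z * Y, Z], [Y, 1]])  # ABCD of one L-section
    abcd = np.linalg.matrix_power(section, n_sections)
    return 1.0 / abcd[0, 0]           # open-circuit transfer = 1/A

for f in (1.0, 10.0, 100.0):
    print(f, abs(chain_response(f)))
```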

  13. Metropolis, legislation and inequality

    Directory of Open Access Journals (Sweden)

    Maricato Ermínia

    2003-08-01

    The Brazilian urbanization process took place practically within the twentieth century. Nevertheless, contrary to the expectations of many, the urban universe did not overcome certain characteristics of the colonial and imperial periods, marked by the concentration of land, income and power, by the exercise of "coronelismo" (the politics of favor) and by the arbitrary application of the law. This text offers a reading of the Brazilian metropolis at the end of the twentieth century, highlighting the relationship between social inequality, territorial segregation and the environment, against the background of authors who have reflected on the "formation" of Brazilian society, especially on the mark of modernization accompanied by the development of backwardness. To this end, it emphasizes the role of the application of the law in maintaining concentrated power and privileges in the cities, which reflects - and at the same time promotes - social inequality in the urban territory.

  14. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  15. Scaling violations at ultra-high energies

    International Nuclear Information System (INIS)

    Tung, W.K.

    1979-01-01

    The paper discusses some of the features of high-energy lepton-hadron scattering, including the observed (Bjorken) scaling behavior. The cross-sections in which all hadronic final states are summed over are examined, along with the general formulas for the differential cross-section. The subject of scaling breaking and its phenomenological consequences is studied, and a list of what ultra-high energy neutrino physics can teach about QCD is given.

  16. Understanding I/O workload characteristics of a Peta-scale storage system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Youngjae [ORNL; Gunasekaran, Raghul [ORNL

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF flagship petascale simulation platform, Titan, and other large HPC clusters - in total over 250 thousand compute cores - depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests relative to write requests for this petascale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage, as well as the inter-arrival time of requests, can be modeled by Pareto distributions.
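
    As a sketch of the heavy-tail modelling mentioned above, the snippet below fits a Pareto tail to synthetic request inter-arrival times using the standard maximum-likelihood (Hill) estimator; the data are stand-ins, not Spider traces.

```python
# Fitting a Pareto tail to request inter-arrival times with the standard
# maximum-likelihood estimator alpha = n / sum(log(x / x_min)). The data
# below are synthetic stand-ins generated by inverse-CDF sampling.
import numpy as np

rng = np.random.default_rng(3)
x_min, alpha_true = 0.001, 1.5                   # seconds, tail index
inter_arrivals = x_min * (1 - rng.uniform(size=100_000)) ** (-1 / alpha_true)

alpha_hat = inter_arrivals.size / np.log(inter_arrivals / x_min).sum()
print(f"fitted tail index alpha ≈ {alpha_hat:.2f}")  # ≈ 1.5
```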

  17. Really Large Scale Computer Graphic Projection Using Lasers and Laser Substitutes

    Science.gov (United States)

    Rother, Paul

    1989-07-01

    This paper reflects on past laser projects that displayed vector-scanned computer graphic images on very large and irregular surfaces. Since the availability of microprocessors and high-powered visible lasers, very large scale computer graphics projection has become a reality. Because they do not depend on a focusing lens, lasers easily project onto distant and irregular surfaces and have been used for amusement parks, theatrical performances, concert performances, industrial trade shows and dance clubs. Lasers have been used to project onto mountains, buildings, 360° globes, clouds of smoke and water. These methods have proven successful in installations at Epcot Theme Park in Florida, Stone Mountain Park in Georgia, the 1984 Olympics in Los Angeles, hundreds of corporate trade shows and thousands of musical performances. With new ColorRay™ technology, the use of costly and fragile lasers is no longer necessary: utilizing fiber optic technology, the functionality of lasers can be duplicated for new and exciting projection possibilities. ColorRay™ technology has enjoyed worldwide recognition in conjunction with Pink Floyd's and George Michael's worldwide tours.

  18. [Nested species subsets of amphibians and reptiles in Thousand Island Lake].

    Science.gov (United States)

    Wang, Xi; Wang, Yan-Ping; Ding, Ping

    2012-10-01

    Habitat fragmentation is a main cause of the loss of biological diversity. Combining line-transect surveys of the amphibians and reptiles on 23 islands in Thousand Island Lake, Zhejiang Province, with data on nearby plant species and habitat variables collected by GIS, we used the "BINMATNEST" (binary matrix nestedness temperature calculator) software and Spearman rank correlations to examine whether amphibians and reptiles form nested subsets and which factors influence them. The results showed that amphibians and reptiles were significantly nested, and that island area and habitat type were significantly associated with their nested ranks. Therefore, to effectively protect amphibians and reptiles in the Thousand Island Lake area, we should give priority to islands with larger areas and more habitat types.
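
    The correlation step of such an analysis is straightforward to reproduce. Below is a minimal sketch with invented island attributes and nested ranks; in the actual study, BINMATNEST would supply the ranks from the presence-absence matrix.

    ```python
    # Sketch: testing whether island attributes correlate with nested ranks,
    # in the spirit of the Thousand Island Lake analysis. All values here
    # are hypothetical placeholders.
    import numpy as np
    from scipy import stats

    island_area = np.array([120.5, 85.2, 60.1, 33.0, 21.7, 12.4, 8.9, 3.2])  # ha, invented
    habitat_types = np.array([6, 5, 5, 4, 3, 3, 2, 1])                       # invented
    nested_rank = np.array([1, 2, 3, 5, 4, 6, 7, 8])                         # from BINMATNEST, invented

    for name, x in [("area", island_area), ("habitat types", habitat_types)]:
        rho, p = stats.spearmanr(x, nested_rank)
        print(f"{name}: Spearman rho = {rho:.2f}, p = {p:.3f}")
    ```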

  19. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  20. ESTIMATING HIGH LEVEL WASTE MIXING PERFORMANCE IN HANFORD DOUBLE SHELL TANKS

    International Nuclear Information System (INIS)

    Thien, M.G.; Greer, D.A.; Townson, P.

    2011-01-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of high level waste (HLW) feed from the Hanford double shell tanks (DSTs) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. The Department of Energy's (DOE's) Tank Operations Contractor (TOC), Washington River Protection Solutions (WRPS) is currently demonstrating mixing, sampling, and batch transfer performance in two different sizes of small-scale DSTs. The results of these demonstrations will be used to estimate full-scale DST mixing performance and provide the key input to a programmatic decision on the need to build a dedicated feed certification facility. This paper discusses the results from initial mixing demonstration activities and presents data evaluation techniques that allow insight into the performance relationships of the two small tanks. The next steps, sampling and batch transfers, of the small scale demonstration activities are introduced. A discussion of the integration of results from the mixing, sampling, and batch transfer tests to allow estimating full-scale DST performance is presented.

  1. High performance high-κ/metal gate complementary metal oxide semiconductor circuit element on flexible silicon

    KAUST Repository

    Sevilla, Galo T.; Almuslem, A. S.; Gumus, Abdurrahman; Hussain, Aftab M.; Hussain, Aftab M.; Cruz, Melvin; Hussain, Muhammad Mustafa

    2016-01-01

    shows large area of silicon thinning with pre-fabricated high performance elements with ultra-large-scale-integration density (using 90 nm node technology) and then dicing of such large and thinned (seemingly fragile) pieces into smaller pieces using

  2. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Directory of Open Access Journals (Sweden)

    Xiaocui Wu

    2015-02-01

    Full Text Available The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales, using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at the half-hourly and daily scales, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and between TL-LUE and MOD17, became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types, while TL-LUE outperformed MOD17 slightly for all these non-forest types at the daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by correcting the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale, while TL-LUE could be used regionally at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
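
    For orientation, a big-leaf light use efficiency model of the MOD17 type computes GPP as a maximum efficiency down-regulated by temperature and dryness scalars. The sketch below uses illustrative ramp functions and parameter values, not the calibrated MOD17 ones.

    ```python
    # Sketch of a big-leaf LUE model in the MOD17 spirit:
    # GPP = eps_max * f(Tmin) * f(VPD) * fPAR * PAR.
    # Scalar forms and constants are invented for illustration.
    import numpy as np

    def ramp(x, x0, x1):
        """Linear 0-1 ramp between x0 and x1, clipped outside."""
        return np.clip((x - x0) / (x1 - x0), 0.0, 1.0)

    def gpp_big_leaf(par, fpar, tmin_c, vpd_pa, eps_max=1.8):
        """GPP per time step; eps_max in gC per MJ of absorbed PAR (assumed)."""
        f_t = ramp(tmin_c, -8.0, 10.0)                 # cold-temperature down-regulation
        f_vpd = 1.0 - ramp(vpd_pa, 650.0, 3900.0)      # dryness down-regulation
        return eps_max * f_t * f_vpd * fpar * par

    print(gpp_big_leaf(par=8.0, fpar=0.7, tmin_c=12.0, vpd_pa=900.0))
    ```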

  3. Harbour porpoise distribution can vary at small spatiotemporal scales in energetic habitats

    Science.gov (United States)

    Benjamins, Steven; van Geel, Nienke; Hastie, Gordon; Elliott, Jim; Wilson, Ben

    2017-07-01

    heterogeneous areas, porpoises can display significant spatiotemporal variability in site use at scales of hundreds of metres and hours. Such variability will not be identified when using solitary moored PAM detectors (a common practice for site-based cetacean monitoring), but may be highly relevant for site-based impact assessments of MRED and other coastal developments. PAM arrays encompassing several detectors spread across a site therefore appear to be a more appropriate tool to study site-specific cetacean use of spatiotemporally heterogeneous habitat and assess the potential impacts of coastal and nearshore developments at small scales.

  4. High voltage distribution scheme for large size GEM detector

    International Nuclear Information System (INIS)

    Saini, J.; Kumar, A.; Dubey, A.K.; Negi, V.S.; Chattopadhyay, S.

    2016-01-01

    Gas Electron Multiplier (GEM) detectors will be used for muon tracking in the Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) at Darmstadt, Germany. The sizes of the detector modules in the muon chambers are of the order of 1 metre x 0.5 metre. For the construction of these chambers, three GEM foils are used per chamber. Each GEM foil is made of a 50 μm thin kapton foil with conductive cladding on both sides, perforated by millions of holes. In such large-scale manufacturing of the foils, even after stringent quality control, some of the holes may still have defects, or defects might develop over time under operating conditions. These defects may result in a short-circuit of the entire GEM foil; a short in even a single hole will make the entire foil unusable. To reduce such occurrences, high voltage (HV) segmentation within the foils has been introduced. These segments are powered either by an individual HV supply per segment or through an active HV distribution that manages such a large number of segments across the foil. Individual supplies, apart from being costly, are highly complex to implement. Additionally, CBM will have a high intensity of particles bombarding the detector, so the current drawn from the resistive chain feeding the GEM detector changes as the intensity varies. This leads to voltage fluctuations across the foil, resulting in gain variation with the particle intensity. Hence, a low-cost active HV distribution has been designed to address the issues discussed above.
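
    The rate-dependent sag can be illustrated with a toy divider model. The resistor values, supply voltage and signal current below are hypothetical, and the uniform-perturbation assumption is a deliberate simplification; GEM gain depends roughly exponentially on the foil voltage, so even volt-level sag matters.

    ```python
    # Sketch: sensitivity of a resistively powered triple-GEM stack to
    # intensity-dependent current draw. All component values are invented.
    import numpy as np

    R = np.array([1.0, 0.5, 1.0, 0.5, 1.0, 0.5, 1.0]) * 1e6  # ohms, hypothetical chain
    V_SUPPLY = 4000.0                                         # volts, hypothetical

    def foil_voltages(i_detector):
        """Voltages across the three GEM foils (chain positions 1, 3, 5)
        when signal-induced current i_detector perturbs the divider.
        Simplistic: the perturbation is applied uniformly along the chain."""
        i_chain = V_SUPPLY / R.sum()
        drops = (i_chain - i_detector) * R
        return drops[[1, 3, 5]]

    quiet = foil_voltages(0.0)
    busy = foil_voltages(2e-6)   # 2 uA of signal current, hypothetical
    print("voltage sag per foil at high rate (V):", quiet - busy)
    ```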

  5. A multi-scale approach for high cycle anisotropic fatigue resistance: Application to forged components

    International Nuclear Information System (INIS)

    Milesi, M.; Chastel, Y.; Hachem, E.; Bernacki, M.; Loge, R.E.; Bouchard, P.O.

    2010-01-01

    Forged components exhibit good mechanical strength, particularly in terms of high cycle fatigue properties. This is due to the specific microstructure resulting from the large plastic deformation of a forging process. The goal of this study is to account for critical phenomena such as the anisotropy of the fatigue resistance in order to perform high cycle fatigue simulations on industrial forged components. Standard high cycle fatigue criteria usually give good results for isotropic behaviors but are not suitable for components with anisotropic features. The aim is to represent this anisotropy explicitly at a lower scale than the process scale and to determine the local coefficients needed to simulate a real case. We developed a multi-scale approach by considering the statistical morphology and mechanical characteristics of the microstructure to represent each element explicitly. From stochastic experimental data, realistic microstructures were reconstructed in order to perform high cycle fatigue simulations on them with different orientations. The meshing was improved by local refinement of each interface, and simulations were performed on each representative elementary volume. The local mechanical anisotropy is taken into account through the distribution of particles. Fatigue parameters identified at the microscale can then be used at the macroscale on the forged component. The link between these data and the process scale is the fiber vector and the deformation state, used to calculate the global mechanical anisotropy. Numerical results reveal the expected behavior compared to experimental tendencies. We prove numerically the dependence of the endurance limit evolution on the anisotropy direction and the deformation state.

  6. The Relative Performance of High Resolution Quantitative Precipitation Estimates in the Russian River Basin

    Science.gov (United States)

    Bytheway, J. L.; Biswas, S.; Cifelli, R.; Hughes, M.

    2017-12-01

    The Russian River carves a 110-mile path through Mendocino and Sonoma counties in western California, providing water for thousands of residents and acres of agriculture as well as a home for several species of endangered fish. The Russian River basin receives almost all of its precipitation during the October-through-March wet season, and the systems bringing this precipitation are often impacted by atmospheric river events as well as the complex topography of the region. This study will examine the performance of several high-resolution (hourly) quantitative precipitation estimates and forecasts over the 2015-2016 and 2016-2017 wet seasons. Comparisons of event-total rainfall as well as hourly rainfall will be performed using 1) rain gauges operated by the National Oceanic and Atmospheric Administration (NOAA) Physical Sciences Division (PSD), 2) products from the Multi-Radar/Multi-Sensor (MRMS) QPE dataset, and 3) quantitative precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model at 1, 3, 6, and 12 hour lead times. Further attention will be given to cases or locations representing large disparities between the estimates.

  7. Scale effect challenges in urban hydrology highlighted with a Fully Distributed Model and High-resolution rainfall data

    Science.gov (United States)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2017-04-01

    Nowadays, there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing interest is devoted to the development of fully distributed, grid-based models, following the growth of computational capabilities and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale that integrates the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their consequences for modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe the catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. The results of this work demonstrate the scale-effect challenges in urban hydrological modeling. The fractal concept highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change with the observation pixel size. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the scaling effect on hydrological model performance.
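
    A box-counting estimate is the simplest of the fractal tools mentioned. The sketch below applies it to a stand-in binary impervious-area mask; real GIS layers would replace the random field.

    ```python
    # Sketch: box-counting estimate of the fractal dimension of a binary
    # impervious-area map, the kind of scale-dependence check described above.
    import numpy as np

    def box_count(mask, sizes):
        counts = []
        for s in sizes:
            # Trim so the grid divides evenly, then count occupied s-by-s boxes.
            h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
            view = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(np.count_nonzero(view.any(axis=(1, 3))))
        return np.array(counts)

    rng = np.random.default_rng(0)
    mask = rng.random((512, 512)) < 0.2       # random stand-in for a GIS layer
    sizes = np.array([2, 4, 8, 16, 32, 64])
    counts = box_count(mask, sizes)
    # Dimension = -slope of log(count) versus log(box size).
    dim = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    print(f"box-counting dimension ~ {dim:.2f}")
    ```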

  8. DITTY - a computer program for calculating population dose integrated over ten thousand years

    International Nuclear Information System (INIS)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.

    1986-03-01

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long-term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages.
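
    The core quantity DITTY reports is a time integral of collective dose. A minimal sketch of that integral, assuming a toy release curve and illustrative units, is:

    ```python
    # Sketch: time-integrated collective dose over a ten-thousand-year window.
    # The dose-rate curve below is invented; DITTY derives it from modeled
    # radionuclide releases and pathways.
    import numpy as np

    years = np.linspace(0.0, 10_000.0, 2_001)   # 5-year steps
    # Hypothetical collective dose rate (person-Sv/yr) from a slow release
    # peaking ~500 years after disposal.
    dose_rate = 1e-3 * np.exp(-(years - 500.0) ** 2 / (2 * 800.0 ** 2))

    collective_dose = np.trapz(dose_rate, years)  # person-Sv over 10,000 yr
    print(f"integrated collective dose ~ {collective_dose:.2f} person-Sv")
    ```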

  9. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    Energy Technology Data Exchange (ETDEWEB)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  10. Millennium scale radiocarbon variations in Eastern North Atlantic thermocline waters: 0-7000 years

    Energy Technology Data Exchange (ETDEWEB)

    Frank, N.; Tisnerat-Laborde, N.; Hatte, C. [LSCE, F-91190 Gif Sur Yvette, (France); Colin, C. [Univ Paris 11, IDES, Orsay, (France); Dottori, M.; Reverdin, G. [Univ Paris 06, LOCEAN, F-75252 Paris, (France)

    2009-07-01

    Complete text of publication follows: Deep water corals are exceptional archives of modern and past ocean circulation, as combined U-series and radiocarbon dating allows seawater radiocarbon to be reconstructed. Here we present thermocline water radiocarbon concentrations reconstructed for the past ~7000 years for the eastern North Atlantic, based on deep-water corals from Rockall Bank and Porcupine Seabight. We find that thermocline water radiocarbon values follow overall the mean atmospheric long-term trend, with an average Δ¹⁴C offset between intermediate water and atmosphere of -55±5 per thousand until 1960 AD. Residual variations are strong (±25 per thousand) over the past 7000 years, and there is first evidence that these are synchronous with millennium-scale climate variability. Over the past 60 years thermocline water radiocarbon values increase due to the penetration of bomb radiocarbon into the upper intermediate ocean. Radiocarbon increases by a Δ¹⁴C of +95 per thousand, compared to +210 per thousand for eastern North Atlantic surface waters. Moreover, bomb-radiocarbon penetration to thermocline depth occurs with a delay of ~10-15 years. Based on high-resolution ocean circulation models, we suggest that radiocarbon changes at upper intermediate depth are today barely affected by vertical mixing and more likely represent variable advection and mixing of water masses from the Labrador Sea and the temperate Atlantic (including Mediterranean outflow water). Consequently, we assume that residual radiocarbon variations over the past 7000 years reflect millennium-scale variability of the Atlantic sub-polar and sub-tropical gyres.

  11. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China

    International Nuclear Information System (INIS)

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-01-01

    Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to
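
    The two model families combine naturally in code. The sketch below pairs a seasonal ARIMA fit for month-scale dynamics with a small GM(1,1) implementation for the annual trend; the data are synthetic and the SARIMA order is a guess, not the one fitted for Xiamen.

    ```python
    # Sketch: hybrid month-scale SARIMA + long-term GM(1,1) forecasting,
    # in the spirit of the Xiamen MSW study. Data and orders are invented.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(1)
    months = 120
    t = np.arange(months)
    msw = 60 + 0.5 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, months)

    # Month-scale: seasonal ARIMA forecast.
    sarima = SARIMAX(msw, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
    print("next 12 months:", sarima.forecast(12).round(1))

    def gm11_forecast(x, k_ahead):
        """GM(1,1) grey model on a series x; returns k_ahead forecasts."""
        x1 = np.cumsum(x)
        z = 0.5 * (x1[1:] + x1[:-1])                 # background values
        B = np.column_stack([-z, np.ones(len(z))])
        a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
        k = np.arange(len(x), len(x) + k_ahead)
        x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
        x1_prev = (x[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x1_hat - x1_prev                      # de-accumulated forecasts

    annual = msw.reshape(-1, 12).sum(axis=1)         # aggregate months to years
    print("next 5 years:", gm11_forecast(annual, 5).round(0))
    ```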

  12. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Lilai, E-mail: llxu@iue.ac.cn [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, 1799 Jimei Road, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Gao, Peiqing, E-mail: peiqing15@yahoo.com.cn [Xiamen City Appearance and Environmental Sanitation Management Office, 51 Hexiangxi Road, Xiamen 361004 (China); Cui, Shenghui, E-mail: shcui@iue.ac.cn [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, 1799 Jimei Road, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Liu, Chun, E-mail: xmhwlc@yahoo.com.cn [Xiamen City Appearance and Environmental Sanitation Management Office, 51 Hexiangxi Road, Xiamen 361004 (China)

    2013-06-15

    Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to

  13. Multijunction Photovoltaic Technologies for High-Performance Concentrators: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    McConnell, R.; Symko-Davies, M.

    2006-05-01

    Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.

  14. LAMMPS strong scaling performance optimization on Blue Gene/Q

    Energy Technology Data Exchange (ETDEWEB)

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.

  15. A comprehensive and quantitative exploration of thousands of viral genomes

    Science.gov (United States)

    Mahmoudabadi, Gita

    2018-01-01

    The complete assembly of viral genomes from metagenomic datasets (short genomic sequences gathered from environmental samples) has proven to be challenging, so there are significant blind spots when we view viral genomes through the lens of metagenomics. One approach to overcoming this problem is to leverage the thousands of complete viral genomes that are publicly available. Here we describe our efforts to assemble a comprehensive resource that provides a quantitative snapshot of viral genomic trends – such as gene density, noncoding percentage, and abundances of functional gene categories – across thousands of viral genomes. We have also developed a coarse-grained method for visualizing viral genome organization for hundreds of genomes at once, and have explored the extent of the overlap between bacterial and bacteriophage gene pools. Existing viral classification systems were developed prior to the sequencing era, so we present our analysis in a way that allows us to assess the utility of the different classification systems for capturing genomic trends. PMID:29624169
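
    Per-genome statistics of this kind are easy to compute once a complete genome record is in hand. A minimal sketch using Biopython follows; "phage.gb" is a placeholder path for any complete viral genome in GenBank format.

    ```python
    # Sketch: per-genome gene density and coding fraction, the kind of
    # quantity the survey aggregates across thousands of viral genomes.
    from Bio import SeqIO

    record = SeqIO.read("phage.gb", "genbank")   # placeholder file path
    genome_len = len(record.seq)

    cds = [f for f in record.features if f.type == "CDS"]
    coding_bp = sum(len(f) for f in cds)  # ignores overlaps; a rough upper bound

    print(f"{record.id}: {len(cds)} CDS in {genome_len} bp")
    print(f"gene density: {1000 * len(cds) / genome_len:.2f} genes/kb")
    print(f"coding      : {100 * min(coding_bp, genome_len) / genome_len:.1f}% (approx.)")
    ```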

  16. Intermediate-Scale High-Solids Anaerobic Digestion System Operational Development

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, C. J.

    1995-02-01

    Anaerobic bioconversion of solid organic wastes represents a disposal option in which two useful products may be produced: a medium-Btu fuel gas (biogas) and a compost-quality organic residue. The application of high-solids technology may offer several advantages over conventional low-solids digester technology. The National Renewable Energy Laboratory (NREL) has developed a unique digester system capable of uniformly mixing high-solids materials at low cost. During the first 1.5 years of operation, a variety of modifications and improvements were instituted to increase the safety, reliability, and performance of the system. Those improvements, which may be critical in further scale-up efforts using the NREL high-solids digester design, are detailed in this report.

  17. Two thousand wind pumps in the arid region of Brazil

    International Nuclear Information System (INIS)

    Feitosa, E.A.N.; Sampaio, G.M.P.

    1991-01-01

    The North-East part of Brazil is an arid region where water pumping is of vital importance. The main strategy of the Wind Energy Group (Eolica) at the University of Pernambuco is to act as a 'catalyst' between the Brazilian government and the companies involved in wind energy. The company CONESP is a drilling company that is also responsible for choosing the appropriate pumping system and providing maintenance. CONESP has already drilled about 6,000 wells and installed 2,000 conventional windmills with piston pumps. Most of the wells have a very low capacity; thus wind pumps, having a relatively low water pumping capacity, are a suitable solution. However, one of the problems with the installed conventional wind pumps is that the drilled tube wells are not perfectly vertical, resulting in wear of the pump rod. Besides, the maintenance or replacement of the piston pump is time-consuming and consequently costly. To reduce operation and maintenance costs, windmills coupled to pneumatic pumps have been developed. Examples are given of air-lift pumps and barc pumps, both using commercially available compressors. The main advantage is that there are no moving parts situated below ground level. Moreover, the windmill does not necessarily have to be placed above the well. Well and windmill can be situated up to 100 metres from each other. The starting torque of this system is also lower than that of the conventional wind pump. It is concluded that windmills with pneumatic pumps have a relatively low efficiency and higher investment costs compared with windmills coupled to piston pumps. However, CONESP's effort is to optimize the total performance of the pumping system. Due to the lower maintenance costs, pneumatic pumps seem to be a viable alternative to piston pumps. 7 figs., 3 refs.

  18. Enabling Structured Exploration of Workflow Performance Variability in Extreme-Scale Environments

    Energy Technology Data Exchange (ETDEWEB)

    Kleese van Dam, Kerstin; Stephan, Eric G.; Raju, Bibi; Altintas, Ilkay; Elsethagen, Todd O.; Krishnamoorthy, Sriram

    2015-11-15

    Workflows are taking an increasingly important role in orchestrating complex scientific processes in extreme-scale and highly heterogeneous environments. However, to date we cannot reliably predict, understand, and optimize workflow performance. Sources of performance variability, and in particular the interdependencies of workflow design, execution environment and system architecture, are not well understood. While there is a rich portfolio of tools for performance analysis, modeling and prediction for single applications in homogeneous computing environments, these are not applicable to workflows, due to the number and heterogeneity of the involved workflow and system components and their strong interdependencies. In this paper, we investigate workflow performance goals and identify factors that could have a relevant impact. Based on our analysis, we propose a new workflow performance provenance ontology, the Open Provenance Model-based WorkFlow Performance Provenance, or OPM-WFPP, that will enable the empirical study of workflow performance characteristics and variability, including complex source attribution.

  19. Transcription Factors Bind Thousands of Active and InactiveRegions in the Drosophila Blastoderm

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiao-Yong; MacArthur, Stewart; Bourgon, Richard; Nix, David; Pollard, Daniel A.; Iyer, Venky N.; Hechmer, Aaron; Simirenko, Lisa; Stapleton, Mark; Luengo Hendriks, Cris L.; Chu, Hou Cheng; Ogawa, Nobuo; Inwood, William; Sementchenko, Victor; Beaton, Amy; Weiszmann, Richard; Celniker, Susan E.; Knowles, David W.; Gingeras, Tom; Speed, Terence P.; Eisen, Michael B.; Biggin, Mark D.

    2008-01-10

    Identifying the genomic regions bound by sequence-specific regulatory factors is central both to deciphering the complex DNA cis-regulatory code that controls transcription in metazoans and to determining the range of genes that shape animal morphogenesis. Here, we use whole-genome tiling arrays to map sequences bound in Drosophila melanogaster embryos by the six maternal and gap transcription factors that initiate anterior-posterior patterning. We find that these sequence-specific DNA binding proteins bind with quantitatively different specificities to highly overlapping sets of several thousand genomic regions in blastoderm embryos. Specific high- and moderate-affinity in vitro recognition sequences for each factor are enriched in bound regions. This enrichment, however, is not sufficient to explain the pattern of binding in vivo and varies in a context-dependent manner, demonstrating that higher-order rules must govern targeting of transcription factors. The more highly bound regions include all of the over forty well-characterized enhancers known to respond to these factors as well as several hundred putative new cis-regulatory modules clustered near developmental regulators and other genes with patterned expression at this stage of embryogenesis. The new targets include most of the microRNAs (miRNAs) transcribed in the blastoderm, as well as all major zygotically transcribed dorsal-ventral patterning genes, whose expression we show to be quantitatively modulated by anterior-posterior factors. In addition to these highly bound regions, there are several thousand regions that are reproducibly bound at lower levels. However, these poorly bound regions are, collectively, far more distant from genes transcribed in the blastoderm than highly bound regions; are preferentially found in protein-coding sequences; and are less conserved than highly bound regions. Together these observations suggest that many of these poorly-bound regions are not involved in early

  20. Porous Graphene Microflowers for High-Performance Microwave Absorption

    Science.gov (United States)

    Chen, Chen; Xi, Jiabin; Zhou, Erzhen; Peng, Li; Chen, Zichen; Gao, Chao

    2018-06-01

    Graphene has shown great potential in microwave absorption (MA) owing to its high surface area, low density, tunable electrical conductivity and good chemical stability. To fully realize graphene's MA ability, the microstructure of graphene should be carefully addressed. Here we prepared graphene microflowers (Gmfs) with a highly porous structure as a high-performance MA filler material. The efficient absorption bandwidth (reflection loss ≤ -10 dB) reaches 5.59 GHz and the minimum reflection loss reaches -42.9 dB, a significant improvement over stacked graphene. Such performance is higher than that of most graphene-based materials in the literature. Besides, the low filling content (10 wt%) and low density (40-50 mg cm-3) are beneficial for practical applications. Without compounding with magnetic materials or conductive polymers, Gmfs show outstanding MA performance with the aid of rational microstructure design. Furthermore, Gmfs exhibit advantages in facile processibility and large-scale production compared with other porous graphene materials, including aerogels and foams.
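
    Reflection-loss figures like these are conventionally computed from single-layer, metal-backed transmission-line theory. The sketch below implements that standard formula; the permittivity and permeability values are invented placeholders, not measured Gmf data.

    ```python
    # Sketch: reflection loss of a metal-backed absorber layer from
    # transmission-line theory. Material parameters below are illustrative.
    import numpy as np

    C = 3e8  # speed of light, m/s

    def reflection_loss_db(f_hz, d_m, eps_r, mu_r):
        """RL(dB) for thickness d with complex eps_r, mu_r at frequency f."""
        z_in = np.sqrt(mu_r / eps_r) * np.tanh(
            1j * 2 * np.pi * f_hz * d_m / C * np.sqrt(mu_r * eps_r))
        gamma = (z_in - 1) / (z_in + 1)   # impedance normalized to free space
        return 20 * np.log10(np.abs(gamma))

    f = np.linspace(2e9, 18e9, 161)
    rl = reflection_loss_db(f, d_m=2.5e-3, eps_r=9.0 - 3.5j, mu_r=1.0 - 0.05j)
    band = f[rl <= -10.0]
    print(f"min RL: {rl.min():.1f} dB")
    if band.size:
        print(f"-10 dB bandwidth: {(band.max() - band.min()) / 1e9:.2f} GHz")
    ```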

  1. Development of large scale production of Nd-doped phosphate glasses for megajoule-scale laser systems

    International Nuclear Information System (INIS)

    Ficini, G.; Campbell, J.H.

    1996-01-01

    Nd-doped phosphate glasses are the preferred gain medium for high-peak-power lasers used for Inertial Confinement Fusion research because they have excellent energy storage and extraction characteristics. In addition, these glasses can be manufactured defect-free in large sizes and at relatively low cost. To meet the requirements of the future megajoule-size lasers, advanced laser glass manufacturing methods are being developed that would enable laser glass to be continuously produced at the rate of several thousand large (790 x 440 x 44 mm³) plates of glass per year. This represents more than a 10- to 100-fold improvement in the scale of the present manufacturing technology.

  2. The cyborg in Metrópolis: notes on the philosophical landscape of dystopia.

    Directory of Open Access Journals (Sweden)

    Gerardo Vázquez Rodríguez

    2017-01-01

    Full Text Available Abstract: Much has been reviewed and written about the film Metrópolis, and over the years the film's value has grown, largely because of its interpretation of a collective future and its broad resonance with the social moments and figures that have appeared throughout the twentieth century and up to the present. The film promulgated discourses that became persistent in science fiction and in the projection of an aesthetic of future modernity. This paper does not attempt to add to the much-explored cinematic and stylistic imaginary of Metrópolis; the film's prominence in this article is a deliberate pretext for articulating a scenario that includes insubstantial characters within the collectivity and its unconscious. Through the archetypes presented in the film, we attempt to clarify the anticipated relationship between cyborg landscapes and those who construct them, rank them, observe them and inhabit them. As a route to this end, the text begins with a discussion of the concepts of utopia and dystopia, tracing their nuances and using them as lenses that frame the subject. Although we start from a neutral analysis of utopia, this paper leans toward the distinctiveness of dystopia and its usefulness in explaining a collective vision of underground society. This vision, housed in the masses rather than in the individual, serves as the guiding thread for the final part of the text, which attempts to postulate the growing and predicted relationship between humanity and machine.

  3. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  4. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    Science.gov (United States)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in recent decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the

  5. Recent advances in ultra-high performance liquid chromatography for the analysis of traditional Chinese medicine

    Science.gov (United States)

    Traditional Chinese medicines (TCMs) have been widely used for the prevention and treatment of various diseases for thousands of years in China. Ultra-high performance liquid chromatography (UHPLC) is a relatively new technique offering new possibilities in liquid chromatography. This paper reviews recent...

  6. Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Aktulga, Hasan Metin [Michigan State Univ., East Lansing, MI (United States); Coffman, Paul [Argonne National Lab. (ANL), Argonne, IL (United States); Shan, Tzu-Ray [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knight, Chris [Argonne National Lab. (ANL), Argonne, IL (United States); Jiang, Wei [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-12-01

    Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and the evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on Mira, a state-of-the-art multi-core IBM Blue Gene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira, with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.

  7. Small Scale Mixing Demonstration Batch Transfer and Sampling Performance of Simulated HLW - 12307

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Jesse; Townson, Paul; Vanatta, Matt [EnergySolutions, Engineering and Technology Group, Richland, WA, 99354 (United States)

    2012-07-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment Plant (WTP) has been recognized as a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. At the end of 2009, DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), awarded a contract to EnergySolutions to design, fabricate and operate a demonstration platform called the Small Scale Mixing Demonstration (SSMD) to establish pre-transfer sampling capacity and batch transfer performance data at two different scales. This data will be used to examine the baseline capacity for a tank mixed via rotational jet mixers to transfer consistent or bounding batches, and to provide scale-up information to predict full-scale operational performance. This information will in turn be used to define the baseline capacity of such a system to transfer and sample batches sent to WTP. The Small Scale Mixing Demonstration (SSMD) platform consists of 43'' and 120'' diameter clear acrylic test vessels, each equipped with two scaled jet mixer pump assemblies, and all supporting vessels, controls, services, and simulant make-up facilities. All tank internals have been modeled, including the air lift circulators (ALCs), the steam heating coil, and the radius between the wall and floor. The test vessels are set up to simulate the transfer of HLW out of a mixed tank, and to collect a pre-transfer sample in a manner similar to the proposed baseline configuration. The collected material is submitted to an NQA-1 laboratory for chemical analysis. Previous work has been done to assess tank mixing performance at both scales. This work involved a combination of unique instruments to understand the three-dimensional distribution of solids using a combination of Coriolis meter measurements, in situ chord length distribution

  8. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of different simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (General-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation that has traditionally been conducted by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the functions for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (a parameter generation tool) and postprocessor tools (a filter tool, a visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed part is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First we performed a strong-scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times by using 4 and 16 GPUs. Next, we examined a weak-scaling test where the model sizes (numbers of grid points) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number

  9. Integration of expression data in genome-scale metabolic network reconstructions

    Directory of Open Access Journals (Sweden)

    Anna S. Blazier

    2012-08-01

    Full Text Available With the advent of high-throughput technologies, the field of systems biology has amassed an abundance of omics data, quantifying thousands of cellular components across a variety of scales, ranging from mRNA transcript levels to metabolite quantities. Methods are needed not only to integrate this omics data but also to use it to heighten the predictive capabilities of computational models. Several recent studies have successfully demonstrated how flux balance analysis (FBA), a constraint-based modeling approach, can be used to integrate transcriptomic data into genome-scale metabolic network reconstructions to generate predictive computational models. In this review, we summarize such FBA-based methods for integrating expression data into genome-scale metabolic network reconstructions, highlighting their advantages as well as their limitations.
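
    At its core, FBA is a linear program over the stoichiometric matrix. A minimal sketch on a toy three-reaction network follows; expression data would typically enter such a model by tightening the flux bounds (the network and bounds here are invented, and real genome-scale reconstructions are handled by tools such as COBRApy).

    ```python
    # Sketch: flux balance analysis as a linear program on a toy network.
    import numpy as np
    from scipy.optimize import linprog

    # Stoichiometric matrix S (metabolites x reactions):
    #   R1: -> A        R2: A -> B        R3: B -> (biomass)
    S = np.array([[1, -1,  0],    # metabolite A
                  [0,  1, -1]])   # metabolite B

    bounds = [(0, 10), (0, 10), (0, 10)]  # flux bounds; expression data would
                                          # typically shrink these per gene
    c = np.array([0, 0, -1])              # maximize R3 => minimize -R3

    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=bounds, method="highs")
    print("optimal fluxes:", res.x, "objective:", -res.fun)
    ```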

  10. Ultra-large scale synthesis of high electrochemical performance SnO₂ quantum dots within 5 min at room temperature following a growth self-termination mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Hongtao, E-mail: htcui@ytu.edu.cn; Xue, Junying; Ren, Wanzhong; Wang, Minmin

    2015-10-05

    Highlights: • SnO₂ quantum dots were prepared at an ultra-large scale at room temperature within 5 min. • The grinding of SnCl₂·2H₂O and ammonium persulphate with morpholine produces quantum dots. • The reactions were self-terminated through the rapid consumption of water. • The obtained SnO₂ quantum dots exhibit high electrochemical performance. - Abstract: SnO₂ quantum dots are prepared at an ultra-large scale by a productive synthetic procedure without using any organic ligand. The grinding of a solid mixture of SnCl₂·2H₂O and ammonium persulphate with morpholine in a mortar at room temperature produces 1.2 nm SnO₂ quantum dots within 5 min. The formation of SnO₂ is initiated by the reaction between tin ions and hydroxyl groups generated from the hydrolysis of morpholine in the hydrate water released from SnCl₂·2H₂O. It is considered that, as water is rapidly consumed by the hydrolysis reaction of morpholine, the growth process of the particles is self-terminated immediately after a transitory period of nucleation and growth. As a result of the simple procedure and its high tolerance to scale-up, at least 50 g of SnO₂ quantum dots can be produced in one batch in our laboratory. The as-prepared quantum dots present high electrochemical performance due to the effective faradaic reaction and the alternative trapping of electrons and holes.

  11. Implementing High-Performance Geometric Multigrid Solver with Naturally Grained Messages

    Energy Technology Data Exchange (ETDEWEB)

    Shan, H; Williams, S; Zheng, Y; Kamil, A; Yelick, K

    2015-10-26

    Structured-grid linear solvers often require manual packing and unpacking of communication data to achieve high performance. Orchestrating this process efficiently is challenging, labor-intensive, and potentially error-prone. In this paper, we explore an alternative approach that communicates the data with naturally grained message sizes without manual packing and unpacking. This approach is the distributed analogue of shared-memory programming, taking advantage of the global address space in PGAS languages to provide substantial programming ease. However, its performance may suffer from the large number of small messages. We investigate the runtime support required in the UPC++ library for this naturally grained version to close the performance gap between the two approaches and attain comparable performance at scale, using the High-Performance Geometric Multigrid (HPGMG-FV) benchmark as a driver.

  12. Image scale measurement with correlation filters in a volume holographic optical correlator

    Science.gov (United States)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-08-01

    A search engine containing various target images or different parts of a large scene area is of great use for many applications, including object detection, biometric recognition, and image registration. The input image captured in real time is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine. It performs thousands of comparisons among the images at super-high speed, with the correlation task accomplished mainly in optics. However, the input target image always contains scale variation relative to the template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. There are three domains in which the scale measurement can be performed: the spatial, spectral and time domains. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a method in the time domain is proposed to measure the scale factor of the input image, called the time-sequential scaled method. The method utilizes the relationship between the scale variation and the correlation value of two images. It sends a few artificially scaled input images to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8~1 and decreases over the interval 1~1.2. The original scale of the input image can be measured by locating the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8. Scale factors beyond 1.2 are measured by scaling the input image by factors of 1/2, 1/3 and 1/4, correlating the artificially scaled input image with the template images, and estimating the new corresponding scale factor within 0.8~1.2.
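
    In software form, the time-sequential idea reduces to correlating rescaled versions of the input against the template and taking the scale with the highest normalized correlation. The sketch below is a NumPy/SciPy stand-in for the optical correlator, using synthetic images.

    ```python
    # Sketch: estimate an unknown scale factor by scanning candidate scales
    # and maximizing normalized correlation against a template.
    import numpy as np
    from scipy.ndimage import zoom

    def ncc(a, b):
        """Normalized correlation of two equally shaped images."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def center_crop(img, shape):
        y = (img.shape[0] - shape[0]) // 2
        x = (img.shape[1] - shape[1]) // 2
        return img[y:y + shape[0], x:x + shape[1]]

    def match_at_scale(inp, template, s):
        scaled = zoom(template, s)
        shape = (min(inp.shape[0], scaled.shape[0]),
                 min(inp.shape[1], scaled.shape[1]))
        return ncc(center_crop(inp, shape), center_crop(scaled, shape))

    rng = np.random.default_rng(3)
    template = rng.random((128, 128))
    true_scale = 1.12
    inp = center_crop(zoom(template, true_scale), template.shape)  # scaled input

    candidates = np.arange(0.8, 1.21, 0.02)
    scores = [match_at_scale(inp, template, s) for s in candidates]
    print(f"estimated scale ~ {candidates[int(np.argmax(scores))]:.2f} "
          f"(true {true_scale})")
    ```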

  13. Case Study: A Picture Worth a Thousand Words? Making a Case for Video Case Studies

    Science.gov (United States)

    Pai, Aditi

    2014-01-01

    A picture, they say, is worth a thousand words. If a mere picture is worth a thousand words, how much more are "moving pictures" or videos worth? The author poses this not merely as a rhetorical question, but because she wishes to make a case for using videos in the traditional case study method. She recommends four main approaches of…

  14. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  15. Density dependence of reactor performance with thermal confinement scalings

    International Nuclear Information System (INIS)

    Stotler, D.P.

    1992-03-01

    Energy confinement scalings for the thermal component of the plasma published thus far have a different dependence on plasma density and input power than do scalings for the total plasma energy. With such thermal scalings, reactor performance (measured by Q, the ratio of the fusion power to the sum of the ohmic and auxiliary input powers) worsens with increasing density. This dependence is the opposite of that found using scalings based on the total plasma energy, indicating that reactor operation concepts may need to be altered if this density dependence is confirmed in future research.

  16. Towards a Database System for Large-scale Analytics on Strings

    KAUST Repository

    Sahli, Majed A.

    2015-07-23

    Recent technological advances are causing an explosion in the production of sequential data. Biological sequences, web logs and time series are represented as strings. Currently, strings are stored, managed and queried in an ad-hoc fashion because they lack a standardized data model and query language. String queries are computationally demanding, especially when strings are long and numerous. Existing approaches cannot handle the growing number of strings produced by environmental, healthcare, bioinformatic, and space applications. There is a trade-off between performing analytics efficiently and scaling to thousands of cores to finish in reasonable times. In this thesis, we introduce a data model that unifies the input and output representations of core string operations. We define a declarative query language for strings where operators can be pipelined to form complex queries. A rich set of core string operators is described to support string analytics. We then demonstrate a database system for string analytics based on our model and query language. In particular, we propose the use of a novel data structure augmented by efficient parallel computation to strike a balance between preprocessing overheads and query execution times. Next, we delve into repeated motifs extraction as a core string operation for large-scale string analytics. Motifs are frequent patterns used, for example, to identify biological functionality, periodic trends, or malicious activities. Statistical approaches are fast but inexact while combinatorial methods are sound but slow. We introduce ACME, a combinatorial repeated motifs extractor. We study the spatial and temporal locality of motif extraction and devise a cache-aware search space traversal technique. ACME is the only method that scales to gigabyte-long strings, handles large alphabets, and supports interesting motif types with minimal overhead. While ACME is cache-efficient, it is limited by being serial. We devise a lightweight
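
    At the naive end of the spectrum, exact fixed-length repeated motifs can be counted with a sliding window; this is the baseline that combinatorial extractors like ACME generalize to variable lengths and huge inputs. A minimal sketch:

    ```python
    # Sketch: naive repeated-motif extraction - count every length-k
    # substring and keep those seen at least min_count times.
    from collections import Counter

    def repeated_motifs(s, k, min_count=2):
        """All length-k substrings of s occurring at least min_count times."""
        counts = Counter(s[i:i + k] for i in range(len(s) - k + 1))
        return {m: c for m, c in counts.items() if c >= min_count}

    seq = "ACGTACGTTTACGTACGAACGT"
    print(repeated_motifs(seq, k=4))
    ```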

  17. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    Increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more site hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  18. Ongoing hydrothermal heat loss from the 1912 ash-flow sheet, Valley of Ten Thousand Smokes, Alaska

    Science.gov (United States)

    Hogeweg, N.; Keith, T.E.C.; Colvard, E.M.; Ingebritsen, S.E.

    2005-01-01

    The June 1912 eruption of Novarupta filled nearby glacial valleys on the Alaska Peninsula with ash-flow tuff (ignimbrite), and post-eruption observations of thousands of steaming fumaroles led to the name 'Valley of Ten Thousand Smokes' (VTTS). By the late 1980s most fumarolic activity had ceased, but the discovery of thermal springs in mid-valley in 1987 suggested continued cooling of the ash-flow sheet. Data collected at the mid-valley springs between 1987 and 2001 show a statistically significant correlation between maximum observed chloride (Cl) concentration and temperature. These data also show a statistically significant decline in the maximum Cl concentration. The observed variation in stream chemistry across the sheet strongly implies that most solutes, including Cl, originate within the area of the VTTS occupied by the 1912 deposits. Numerous measurements of Cl flux in the Ukak River just below the ash-flow sheet suggest an ongoing heat loss of ≈250 MW. This represents one of the largest hydrothermal heat discharges in North America. Other hydrothermal discharges of comparable magnitude are related to heat obtained from silicic magma bodies at depth, and are quasi-steady on a multidecadal time scale. However, the VTTS hydrothermal flux is not obviously related to a magma body and is clearly declining. Available data provide reasonable boundary and initial conditions for simple transient modeling. Both an analytical, conduction-only model and a numerical model predict large rates of heat loss from the sheet 90 years after deposition.
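
    The abstract does not spell out how a chloride flux is converted into a heat loss. The usual chloride-inventory argument, written here with assumed symbols, scales the measured Cl discharge by the enthalpy-to-chloride ratio of the thermal end-member water:

```latex
Q_{\mathrm{heat}} \approx \dot{m}_{\mathrm{Cl}} \, \frac{h_{t}}{C_{\mathrm{Cl}}}
```

    where $\dot{m}_{\mathrm{Cl}}$ is the Cl mass flux in the Ukak River, $C_{\mathrm{Cl}}$ the Cl concentration of the thermal water, and $h_{t}$ its enthalpy relative to ambient recharge.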

  19. Changes of vegetation vis-a-vis Climate since last several thousand ...

    Indian Academy of Sciences (India)

    IPSITA

    ... since last several thousand years at North-East India ... of agricultural practices showed the presence of cereal and other cultural pollen taxa. Later, a more humid ... research in southeast Asia, and also to trace the migratory route of flora.

  20. Study of the emission performance of high-power klystrons: SLAC XK-5

    International Nuclear Information System (INIS)

    Zhao, Y.

    1981-07-01

    There are hundreds of high power klystrons operated in the Linac gallery and about fifty to sixty tubes fail every year. The lifetime ranges from a few thousand up to seventy thousand hours except those which fail during an early period. The overall percentage of failures due to emission problems is approximately 25%. It is also noted that a 10% increase in mean lifetime of klystrons will reduce the overall cost per hour as much as a 10% increase in efficiency. Therefore, it is useful to find some method to predict the expected life of an individual tube. The final goal has not been attained yet, but some useful information was obtained. It is thought that this information might be helpful for those people who will study this subject further.

  2. Comparative study on the growth performance of the hybrid catfish ...

    African Journals Online (AJOL)

    Growth performance of the hybrid catfish Heteroclarias reared in concrete and earthen pond systems were investigated in a 92-day experiment. Experiment was conducted using four rectangular ponds (2 concrete and 2 earthen) each measuring 14 × 6 × 1.5 metres in duplicates. The ponds were uniformly limed, fertilized ...

  3. Effective Rating Scale Development for Speaking Tests: Performance Decision Trees

    Science.gov (United States)

    Fulcher, Glenn; Davidson, Fred; Kemp, Jenny

    2011-01-01

    Rating scale design and development for testing speaking is generally conducted using one of two approaches: the measurement-driven approach or the performance data-driven approach. The measurement-driven approach prioritizes the ordering of descriptors onto a single scale. Meaning is derived from the scaling methodology and the agreement of…

  4. Novel nano materials for high performance logic and memory devices

    Science.gov (United States)

    Das, Saptarshi

    After decades of relentless progress, the silicon CMOS industry is approaching a stall in device performance for both logic and memory devices due to fundamental scaling limitations. In order to maintain the accelerating pace, novel materials with unique properties are being proposed on an urgent basis. This list includes one-dimensional nanotubes, quasi-one-dimensional nanowires, and two-dimensional atomically thin layered materials like graphene, hexagonal boron nitride and, more recently, the rich family of transition metal dichalcogenides comprising MoS2, WSe2, WS2 and many more for logic applications, as well as organic and inorganic ferroelectrics, phase-change materials and magnetic materials for memory applications. Only time will tell which will win, but exploring these novel materials allows us to revisit the fundamentals and strengthen our understanding, which will ultimately be beneficial for high-performance device design. While there has been growing interest in two-dimensional (2D) crystals other than graphene, evaluating their potential usefulness for electronic applications is still in its infancy due to the lack of a complete picture of their performance potential. The fact that the 2D layered semiconducting dichalcogenides need to be connected to the "outside" world in order to capitalize on their ultimate potential immediately emphasizes the importance of a thorough understanding of the contacts. This thesis demonstrates that through a proper understanding and design of source/drain contacts and the right choice of the number of MoS2 layers, the excellent intrinsic properties of this 2D material can be harvested. A comprehensive experimental study on the dependence of carrier mobility on the layer thickness of back-gated multilayer MoS2 field-effect transistors is also provided. A resistor network model that comprises Thomas-Fermi charge screening and interlayer coupling is used to explain the non-monotonic trend in the extracted field effect

  5. High-level-waste containment for a thousand years: unique technical and research problems

    International Nuclear Information System (INIS)

    Davis, M.S.

    1982-01-01

    In the United States the present policy for disposal of high level nuclear wastes is focused on isolation of solidified wastes in a mined geologic repository. Safe isolation is to be achieved by utilizing both natural and man-made barriers which will act in concert to assure the overall conservative performance of the disposal system. The incorporation of predictable man-made barriers into the waste disposal strategy has generated some new and unique problems for the scientific community

  6. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.gov [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
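
    The LS3DF patching formula itself is not given in the abstract. As a one-dimensional toy of the divide-and-conquer cancellation idea (not the published method), overlapping two-site fragments can be combined by inclusion-exclusion so that the doubly counted pieces drop out:

```python
def total_by_fragments(cell_energy, n):
    """Toy 1D divide-and-conquer sum on a periodic chain of n sites.
    cell_energy(i, j) returns the energy of the fragment covering sites
    i..j-1. Summing overlapping 2-site fragments and subtracting the
    doubly counted 1-site fragments reproduces the exact total for a
    nearest-neighbour model while keeping every calculation local."""
    total = 0.0
    for i in range(n):
        total += cell_energy(i, i + 2)      # overlapping 2-site fragment
        total -= cell_energy(i + 1, i + 2)  # remove the double-counted site
    return total

# Example: each site contributes 1.0, each bond inside a fragment 0.5.
def cell_energy(i, j):
    n_sites = j - i
    return 1.0 * n_sites + 0.5 * (n_sites - 1)

print(total_by_fragments(cell_energy, 8))  # 8 sites + 8 periodic bonds = 12.0
```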

  7. High-throughput micro-scale cultivations and chromatography modeling: Powerful tools for integrated process development.

    Science.gov (United States)

    Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen

    2015-10-01

    Upstream processes are rather complex to design, and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimal system need not be the one with the highest product titers, but the one resulting in superior overall process performance in up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capture were validated subsequently. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.

  8. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  9. High performance sinter-HIP for hard metals

    International Nuclear Information System (INIS)

    Hongxia Chen; Deming Zhang; Yang Li; Jingping Chen

    2001-01-01

    The horizontal sinter-HIP equipment with great charge capacity and high performance, developed and manufactured by the Central Iron and Steel Research Institute (CISRI), is mainly used for the sintering and densification of hard metals. This equipment is characterized by a large hot zone, high heating speed, good temperature uniformity and a fast cooling system. The equipment can provide a uniform hot zone with a temperature difference of less than 6 °C at 1500-1600 °C and 6-10 MPa by controlling temperature, pressure and gas circulation precisely. Using large-scale horizontal sinter-HIP equipment to produce hard metals has many advantages, such as stable quality, high production efficiency, a high rate of finished products and low production cost, so this equipment is a good choice for manufacturers of hard metals. (author)

  10. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community

  11. Scale hierarchy in high-temperature QCD

    CERN Document Server

    Akerlund, Oscar

    2013-01-01

    Because of asymptotic freedom, QCD becomes weakly interacting at high temperature: this is the reason for the transition to a deconfined phase in Yang-Mills theory at temperature $T_c$. At high temperature $T \\gg T_c$, the smallness of the running coupling $g$ induces a hierarchy between the "hard", "soft" and "ultrasoft" energy scales $T$, $g T$ and $g^2 T$. This hierarchy allows for a very successful effective treatment where the "hard" and the "soft" modes are successively integrated out. However, it is not clear how high a temperature is necessary to achieve such a scale hierarchy. By numerical simulations, we show that the required temperatures are extremely high. Thus, the quantitative success of the effective theory down to temperatures of a few $T_c$ appears surprising a posteriori.

  12. Towards high performance processing in modern Java-based control systems

    International Nuclear Information System (INIS)

    Misiowiec, M.; Buczak, W.; Buttner, M.

    2012-01-01

    CERN controls software is often developed on Java foundations. Some systems carry out a combination of data-, network- and processor-intensive tasks within strict time limits. Hence, there is a demand for high-performing, quasi-real-time solutions. The system must handle tens of thousands of data samples every second, along its three tiers, applying complex computations throughout. To accomplish the goal, a deep understanding of multi-threading, memory management and inter-process communication was required. There are unexpected traps hidden behind an excessive use of 64-bit memory or the severe impact of modern garbage collectors on the processing flow. Tuning the JVM configuration significantly affects the execution of the code. Even more important is the number of threads and the data structures used between them. Accurately dividing work into independent tasks might boost system performance. Thorough profiling with dedicated tools helped understand the bottlenecks and choose algorithmically optimal solutions. Different virtual machines were tested, in a variety of setups and garbage collection options. The overall work revealed the actual hard limits of the whole setup. We present this process of designing a challenging system in view of the characteristics and limitations of the contemporary Java run-time environment. (authors)

  13. Nuclear rockets: High-performance propulsion for Mars

    International Nuclear Information System (INIS)

    Watson, C.W.

    1994-05-01

    A new impetus to manned Mars exploration was introduced by President Bush in his Space Exploration Initiative. This has led, in turn, to a renewed interest in high-thrust nuclear thermal rocket propulsion (NTP). The purpose of this report is to give a brief tutorial introduction to NTP and provide a basic understanding of some of the technical issues in the realization of an operational NTP engine. Fundamental physical principles are outlined from which a variety of qualitative advantages of NTP over chemical propulsion systems derive, and quantitative performance comparisons are presented for illustrative Mars missions. Key technologies are described for a representative solid-core heat-exchanger class of engine, based on the extensive development work in the Rover and NERVA nuclear rocket programs (1955 to 1973). The most driving technology, fuel development, is discussed in some detail for these systems. Essential highlights are presented for the 19 full-scale reactor and engine tests performed in these programs. On the basis of these tests, the practicality of graphite-based nuclear rocket engines was established. Finally, several higher-performance advanced concepts are discussed. These have received considerable attention, but have not, as yet, developed enough credibility to receive large-scale development

  14. New Panorama Reveals More Than a Thousand Black Holes

    Science.gov (United States)

    2007-03-01

    By casting a wide net, astronomers have captured an image of more than a thousand supermassive black holes. These results give astronomers a snapshot of a crucial period when these monster black holes are growing, and provide insight into the environments in which they occur. The new black hole panorama was made with data from NASA's Chandra X-ray Observatory, the Spitzer Space Telescope and ground-based optical telescopes. The black holes in the image are hundreds of millions to several billion times more massive than the sun and lie in the centers of galaxies. [Image: X-ray, IR & optical composites of obscured & unobscured AGN in the Bootes field] Material falling into these black holes at high rates generates huge amounts of light that can be detected in different wavelengths. These systems are known as active galactic nuclei, or AGN. "We're trying to get a complete census across the Universe of black holes and their habits," said Ryan Hickox of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Mass. "We used special tactics to hunt down the very biggest black holes." Instead of staring at one relatively small part of the sky for a long time, as with the Chandra Deep Fields -- two of the longest exposures obtained with the observatory -- and other concentrated surveys, this team scanned a much bigger portion with shorter exposures. Since the biggest black holes power the brightest AGN, they can be spotted at vast distances, even with short exposures. [Image: Chandra images scaled to the full Moon] "With this approach, we found well over a thousand of these monsters, and have started using them to test our understanding of these powerful objects," said co-investigator Christine Jones, also of the CfA. The new survey raises doubts about a popular current model in which a supermassive black hole is surrounded by a doughnut-shaped region, or torus, of gas. An

  15. Module-scale analysis of pressure retarded osmosis: performance limitations and implications for full-scale operation.

    Science.gov (United States)

    Straub, Anthony P; Lin, Shihong; Elimelech, Menachem

    2014-10-21

    We investigate the performance of pressure retarded osmosis (PRO) at the module scale, accounting for the detrimental effects of reverse salt flux, internal concentration polarization, and external concentration polarization. Our analysis offers insights on optimization of three critical operation and design parameters--applied hydraulic pressure, initial feed flow rate fraction, and membrane area--to maximize the specific energy and power density extractable in the system. For co- and counter-current flow modules, we determine that appropriate selection of the membrane area is critical to obtain a high specific energy. Furthermore, we find that the optimal operating conditions in a realistic module can be reasonably approximated using established optima for an ideal system (i.e., an applied hydraulic pressure equal to approximately half the osmotic pressure difference and an initial feed flow rate fraction that provides equal amounts of feed and draw solutions). For a system in counter-current operation with a river water (0.015 M NaCl) and seawater (0.6 M NaCl) solution pairing, the maximum specific energy obtainable using performance properties of commercially available membranes was determined to be 0.147 kWh per m³ of total mixed solution, which is 57% of the Gibbs free energy of mixing. Operating to obtain a high specific energy, however, results in very low power densities (less than 2 W/m²), indicating that the trade-off between power density and specific energy is an inherent challenge to full-scale PRO systems. Finally, we quantify additional losses and energetic costs in the PRO system, which further reduce the net specific energy and indicate serious challenges in extracting net energy in PRO with river water and seawater solution pairings.
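
    As a back-of-the-envelope companion to the numbers above (symbols here are assumptions, not taken from the paper): in an idealized PRO stage the work extracted per unit permeate volume equals the applied hydraulic pressure, so the specific energy normalized by the total mixed volume is

```latex
SE = \frac{\Delta P \, V_{p}}{V_{f} + V_{d}}, \qquad \Delta P \approx \frac{\Delta \pi}{2}
```

    with $V_{p}$ the permeate volume, $V_{f}$ and $V_{d}$ the initial feed and draw volumes, and $\Delta \pi$ the osmotic pressure difference. The trade-off noted above arises because raising $\Delta P$ toward $\Delta \pi$ increases the work per unit permeate but throttles the water flux, and hence the power density.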

  16. Management issues for high performance storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development, including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  17. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  18. Pressurized gasification solves many problems. IVOSDIG process for peat, wood and sludge

    Energy Technology Data Exchange (ETDEWEB)

    Heinonen, O.; Repo, A.

    1996-11-01

    Research is now being done on one of the essential elements of pressurized gasification: the feeding of fuel into high pressure. At the IVOSDIG pilot plant in Jyvaeskylae, a pilot-scale piston feeder for peat, wood and sludge has been tested. A piston feeder achieves pressurization through the movement of the piston, not by inert pressurization gas. The feeder cylinder turns 180 degrees to another position, and the piston forces the fuel contained in the cylinder into the pressure vessel, which is at the process pressure. The feeder has two cylinders; one is filled while the other is being emptied. In pilot-scale tests, the capacity of the feeder is ten cubic metres of fuel per hour. The commercial-scale feeder has been designed for a capacity of fifty cubic metres per hour. The feeder operates hydraulically, and the hydraulic system can be assembled from commercially available components. IVO began development work to devise a feeder based on the piston technique in 1992. During 1993, short tests were performed with the pilot-scale feeder. Tests under real conditions were begun during 1994 at the laboratory of VTT Energy in Jyvaeskylae, which houses the IVOSDIG pressurized gasification pilot plant for moist fuels developed by IVO

  19. Performance Assessment of Full-Scale Wastewater Treatment Plants Based on Seasonal Variability of Microbial Communities via High-Throughput Sequencing.

    Directory of Open Access Journals (Sweden)

    Tang Liu

    Full Text Available Microbial communities of activated sludge (AS) play a key role in the performance of wastewater treatment processes. However, the seasonal variability of microbial populations in different AS-based processes has been poorly correlated with the operation of full-scale wastewater treatment systems (WWTSs). In this paper, significant seasonal variability of AS microbial communities in eight WWTSs located in the city of Guangzhou was revealed by 16S rRNA-based MiSeq sequencing. Furthermore, redundancy analysis (RDA) demonstrated that the microbial community compositions closely correlated with WWTS operation parameters such as temperature, BOD, NH4+-N and TN. Consequently, support vector regression models that reasonably predicted effluent BOD, SS and TN in WWTSs were established based on microbial community compositions. This work provides an alternative tool for rapid assessment of the performance of full-scale wastewater treatment plants.
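
    The paper's regression setup is not detailed in the abstract. A minimal scikit-learn sketch of the general approach -- predicting an effluent parameter from community composition fractions -- using purely synthetic stand-in data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-in: 40 samples x 10 taxa relative abundances and one
# effluent metric (e.g. TN) loosely driven by two dominant taxa.
X = rng.dirichlet(np.ones(10), size=40)
y = 5.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0, 0.1, size=40)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:30], y[:30])
print("held-out R^2:", model.score(X[30:], y[30:]))
```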

  20. Using a Malcolm Baldrige framework to understand high-performing clinical microsystems.

    Science.gov (United States)

    Foster, Tina C; Johnson, Julie K; Nelson, Eugene C; Batalden, Paul B

    2007-10-01

    BACKGROUND, OBJECTIVES AND METHOD: The Malcolm Baldrige National Quality Award (MBNQA) provides a set of criteria for organisational quality assessment and improvement that has been used by thousands of business, healthcare and educational organisations for more than a decade. The criteria can be used as a tool for self-evaluation, and are widely recognised as a robust framework for design and evaluation of healthcare systems. The clinical microsystem, as an organisational construct, is a systems approach for providing clinical care based on theories from organisational development, leadership and improvement. This study compared the MBNQA criteria for healthcare and the success factors of high-performing clinical microsystems to (1) determine whether microsystem success characteristics cover the same range of issues addressed by the Baldrige criteria and (2) examine whether this comparison might better inform our understanding of either framework. Both Baldrige criteria and microsystem success characteristics cover a wide range of areas crucial to high performance. Those particularly called out by this analysis are organisational leadership, work systems and service processes from a Baldrige standpoint, and leadership, performance results, process improvement, and information and information technology from the microsystem success characteristics view. Although in many cases the relationship between Baldrige criteria and microsystem success characteristics are obvious, in others the analysis points to ways in which the Baldrige criteria might be better understood and worked with by a microsystem through the design of work systems and a deep understanding of processes. Several tools are available for those who wish to engage in self-assessment based on MBNQA criteria and microsystem characteristics.

  1. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full-physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We will present how we enabled fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.

  2. On dark matter selected high-scale supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Sibo [Department of Physics, Chongqing University,Chongqing 401331 (China)

    2015-03-11

    The prediction for the Higgs mass in dark-matter-selected high-scale SUSY is explored. We show the bounds on the SUSY-breaking scale in models of SM+$\tilde{w}$ and SM+$\tilde{h}/\tilde{s}$ due to the observed Higgs mass at the LHC. We propose that the effective theory below the scale $\tilde{m}$ described by SM+$\tilde{w}$ is possibly realized in gauge mediation with multiple spurion fields that exhibit a significant mass hierarchy, and that the one described by SM+$\tilde{h}/\tilde{s}$ can be realized with a direct singlet-messenger-messenger coupling for singlet Yukawa coupling $\lambda \sim (v/\tilde{m})^{1/2} g_{\mathrm{SM}}$. Finally, the constraint on high-scale SUSY is investigated in the light of inflation physics if these two subjects are directly related.

  3. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  4. Development of high performance readout ASICs for silicon photomultipliers (SiPMs)

    International Nuclear Information System (INIS)

    Shen, Wei

    2012-01-01

    Silicon photomultipliers (SiPMs) are a novel kind of solid-state photon detector with extremely high photon detection resolution. They are composed of hundreds or thousands of avalanche photodiode pixels connected in parallel, operated in Geiger mode. SiPMs have multiplication gains of the same order of magnitude as conventional photomultiplier tubes (PMTs). Moreover, they have many advantages such as compactness, relatively low bias voltage and immunity to magnetic fields. Special readout electronics are required to preserve the high performance of the detector. KLauS and STiC are two CMOS ASIC chips designed specifically for SiPMs. KLauS is used for SiPM charge readout applications. Since SiPMs have a much larger detector capacitance than other solid-state photon detectors such as PIN diodes and APDs, a few special techniques are used inside the chip to make sure a decent signal-to-noise ratio for the pixel charge signal can be obtained. STiC is a chip dedicated to SiPM time-of-flight applications. High-bandwidth and low-jitter design schemes are mandatory for such applications, where a time jitter of less than tens of picoseconds is required. Design schemes and error analysis as well as measurement results are presented in the thesis.

  5. Relevant Spatial Scales of Chemical Variation in Aplysina aerophoba

    Directory of Open Access Journals (Sweden)

    Oriol Sacristan-Soriano

    2011-11-01

    Full Text Available Understanding the scale at which natural products vary the most is critical because it sheds light on the type of factors that regulate their production. Aplysina aerophoba is a common sponge inhabiting shallow waters in the Mediterranean and its area of influence in the Atlantic Ocean. This species contains large concentrations of brominated alkaloids (BAs) that play a number of ecological roles in nature. Our research investigates the ecological variation in BAs of A. aerophoba on scales from hundreds of meters to thousands of kilometers. We used a nested design to sample sponges from two geographically distinct regions (Canary Islands and Mediterranean, over 2500 km), with two zones within each region (less than 50 km), two locations within each zone (less than 5 km), and two sites within each location (less than 500 m). We used high-performance liquid chromatography to quantify multiple BAs and a spectrophotometer to quantify chlorophyll a (Chl a). Our results show a striking degree of variation in both natural products and Chl a content. Significant variation in Chl a content occurred at the largest and smallest geographic scales. The variation patterns of BAs also occurred at the largest and smallest scales, but varied depending on which BA was analyzed. Concentrations of Chl a and isofistularin-3 were negatively correlated, suggesting that symbionts may impact the concentration of some of these compounds. Our results underline the complex control of the production of secondary metabolites, with factors acting at both small and large geographic scales affecting the production of multiple secondary metabolites.

  6. Big Data solutions on a small scale: Evaluating accessible high-performance computing for social research

    OpenAIRE

    Murthy, Dhiraj; Bowman, S. A.

    2014-01-01

    Though full of promise, Big Data research success is often contingent on access to the newest, most advanced, and often expensive hardware systems and the expertise needed to build and implement such systems. As a result, the accessibility of the growing number of Big Data-capable technology solutions has often been the preserve of business analytics. Pay as you store/process services like Amazon Web Services have opened up possibilities for smaller scale Big Data projects. There is high dema...

  7. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  8. Intelligent Facades for High Performance Green Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

    Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies. Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net-zero buildings, because it has failed to couple the many qualitative requirements of building envelopes, such as the provision of daylighting, access to exterior views, and satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken the development of a high-performance building-integrated combined heat and power concentrating photovoltaic system with high-temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution we are offering, the Integrated Concentrating Solar Façade (ICSF), is conceived to improve daylighting quality for improved occupant health and to mitigate solar heat gain while maximally capturing and transferring on-site solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads), thereby transforming a previously problematic source of energy into a high-quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possibly further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge of transitioning commercial and residential building stock towards on-site clean-energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building

  9. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    Science.gov (United States)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization. It is important to use dynamic visualization techniques to reveal variation processes and trends vividly and comprehensively for geographical phenomena. Meeting the challenges posed by dynamic visualization of both 2D and 3D spatial dynamic targets, especially across different spatial data types, requires a high-performance GIS dynamic objects rendering engine. The main approach for improving a rendering engine that handles vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented to solve this problem based on hybrid acceleration techniques. The rendering engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantage of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly even with vast amounts of dynamic target data. A prototype of the high-performance GIS dynamic objects rendering engine was developed based on SuperMap GIS iObjects. Experiments designed for large-scale spatial data visualization showed that the engine achieves high performance: rendering two-dimensional and three-dimensional dynamic objects is up to 20 times faster on GPU than on CPU.

  10. [Quality of sleep and academic performance in high school students].

    Science.gov (United States)

    Bugueño, Maithe; Curihual, Carolina; Olivares, Paulina; Wallace, Josefa; López-AlegrÍa, Fanny; Rivera-López, Gonzalo; Oyanedel, Juan Carlos

    2017-09-01

    Sleeping and studying are the day-to-day activities of a teenager attending school. To determine the quality of sleep and its relationship to academic performance among students attending morning and afternoon shifts in a public high school. Students of the first and second year of high school answered an interview about socio-demographic background, academic performance, student activities and subjective sleep quality; they were evaluated using the Pittsburgh Sleep Quality Index (PSQI). The interview was answered by 322 first-year students aged 15 ± 0.5 years attending the morning shift and 364 second-year students, aged 16 ± 0.5 years, attending the afternoon shift. The components sleep latency, habitual sleep efficiency, sleep disturbance, drug use and daytime dysfunction were similar and classified as good in both school shifts. The components subjective sleep quality and duration of sleep had higher scores among students of the morning shift. The mean grades during the first semester of the students attending morning and afternoon shifts were 5.9 and 5.8, respectively (on a scale from 1 to 7). Among students of both shifts, the PSQI score was associated inversely and significantly with academic performance. Bad sleep quality influences academic performance in these students.
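
    The reported inverse association is a simple correlation between PSQI scores and grades; purely as an illustration of the computation (the numbers below are synthetic, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-ins: global PSQI scores (higher = worse sleep) and
# first-semester grades on the 1-7 Chilean scale.
psqi = np.array([3, 5, 8, 10, 4, 7, 12, 6, 9, 2])
grades = np.array([6.5, 6.0, 5.5, 5.0, 6.3, 5.6, 4.8, 5.9, 5.2, 6.8])

r, p = pearsonr(psqi, grades)
print(f"r = {r:.2f}, p = {p:.3g}")  # negative r mirrors the inverse association
```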

  11. Globalização, metrópoles e crise social no Brasil

    Directory of Open Access Journals (Sweden)

    Inaiá Maria Moreira de Carvalho

    2006-05-01

    Full Text Available This article discusses how the process of productive restructuring and the articulation of the Brazilian economy with globalization have been impacting the country's large metropolises, where productive activities, wealth and power are concentrated alongside the population. To that end, the text considers how this process has contributed to the redefinition of territories, the shaping of new productive and urban architectures, and changes in occupational and social conditions. It then points out how these phenomena have been occurring in Brazilian metropolitan areas, with advancing urban segmentation, occupational precariousness, vulnerability and unemployment, turning these areas into the epicentre of Brazil's social crisis.

  12. Uma Idéia de Metrópole no Século XIX

    Directory of Open Access Journals (Sweden)

    Ricardo Marques de Azevedo

    1998-01-01

    Full Text Available In the seventeenth and eighteenth centuries, the capital cities of the absolutist sovereignties were consolidated. With the Industrial Revolution, peasant migration and political unrest at the end of the eighteenth century and the beginning of the nineteenth, some of these cities, especially London and Paris, rose to the condition of metropolises. This article attempts to show how in Paris, the most cosmopolitan city of the period, new behaviours, fashions, manners and even a new repertoire of gestures emerged. It also points to the representations elaborated in literature and the human sciences about this new way of life, in which certain avant-gardists of the beginning of this century glimpsed the germination of a new sensibility.

  13. High-Temperature Structural Analysis of a Small-Scale Prototype of a Process Heat Exchanger (IV) - Macroscopic High-Temperature Elastic-Plastic Analysis -

    International Nuclear Information System (INIS)

    Song, Kee Nam; Hong, Sung Deok; Park, Hong Yoon

    2011-01-01

    A PHE (Process Heat Exchanger) is a key component required to transfer heat energy of 950 °C generated in a VHTR (Very High Temperature Reactor) to the chemical reaction that yields a large quantity of hydrogen. A small-scale PHE prototype made of Hastelloy-X was scheduled for testing in a small-scale gas loop at the Korea Atomic Energy Research Institute. In this study, as a part of the evaluation of the high-temperature structural integrity of the PHE prototype, high-temperature structural analysis modeling, and macroscopic thermal and elastic-plastic structural analysis of the PHE prototype were carried out under the gas-loop test conditions as a preliminary study before carrying out the performance test in the gas loop. The results obtained in this study will be used to design the performance test setup for the modified PHE prototype

  14. Small-scale demonstration of high-level radioactive waste processing and solidification using actual SRP waste

    International Nuclear Information System (INIS)

    Okeson, J.K.; Galloway, R.M.; Wilhite, E.L.; Woolsey, G.B.; Ferguson, R.B.

    1980-01-01

    A small-scale demonstration of the high-level radioactive waste solidification process by vitrification in borosilicate glass is being conducted using 5-6 liter batches of actual waste. Equipment performance and processing characteristics of the various unit operations in the process are reported and, where appropriate, are compared to large-scale results obtained with synthetic waste

  15. Joint Multi-scale Convolution Neural Network for Scene Classification of High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZHENG Zhuo

    2018-05-01

    Full Text Available High resolution remote sensing imagery scene classification is important for automatic complex scene recognition, which is a key technology for military applications and disaster relief, among others. In this paper, we propose a novel joint multi-scale convolution neural network (JMCNN) method using a limited amount of image data for high resolution remote sensing imagery scene classification. Different from traditional convolutional neural networks, the proposed JMCNN is an end-to-end training model with joint enhanced high-level feature representation, which includes a multi-channel feature extractor, joint multi-scale feature fusion and a Softmax classifier. First, multi-channel and multi-scale convolutional extractors are used to extract mid-level scene features. Then, in order to achieve enhanced high-level feature representation on a limited dataset, joint multi-scale feature fusion is proposed to combine multi-channel and multi-scale features using two feature fusions. Finally, the enhanced high-level feature representation is used for classification by Softmax. Experiments were conducted using two limited public datasets, UCM and SIRI. Compared to state-of-the-art methods, the JMCNN achieved improved performance and great robustness with average accuracies of 89.3% and 88.3% on the two datasets.
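
    The JMCNN architecture is only outlined above. A compact PyTorch sketch of the general pattern it describes -- parallel extractors at different kernel scales, fusion by concatenation, and a softmax classifier head -- with all layer sizes assumed rather than taken from the paper:

```python
import torch
import torch.nn as nn

class MultiScaleSceneNet(nn.Module):
    """Toy multi-channel/multi-scale scene classifier: parallel conv
    branches with different kernel sizes, fused by concatenation
    before a linear layer (softmax is applied inside the loss)."""
    def __init__(self, num_classes=21):  # e.g. the 21 UCM scene classes
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global pooling per branch
            )
            for k in (3, 5, 7)  # three spatial scales
        ])
        self.classifier = nn.Linear(3 * 32, num_classes)

    def forward(self, x):
        feats = [b(x).flatten(1) for b in self.branches]  # fuse multi-scale features
        return self.classifier(torch.cat(feats, dim=1))

logits = MultiScaleSceneNet()(torch.randn(2, 3, 256, 256))
print(logits.shape)  # torch.Size([2, 21])
```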

  16. Performance Validation and Scaling of a Capillary Membrane Solid-Liquid Separation System

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, S; Cook, J; Juratovac, J; Goodwillie, J; Burke, T

    2011-10-25

    Algaeventure Systems (AVS) has previously demonstrated an innovative technology for dewatering algae slurries that dramatically reduces energy consumption by utilizing surface physics and capillary action. Funded by a $6M ARPA-E award, transforming the original Harvesting, Dewatering and Drying (HDD) prototype machine into a commercially viable technology has required significant attention to material performance, integration of sensors and control systems, and especially addressing scaling issues that would allow processing extreme volumes of algal cultivation media/slurry. Decoupling the harvesting, dewatering and drying processes, and addressing the rate-limiting steps for each of the individual steps, has allowed for the development of individual technologies that may be tailored to the specific needs of various cultivation systems. The primary performance metric used by AVS to assess the economic viability of its Solid-Liquid Separation (SLS) dewatering technology is algae mass production rate as a function of power consumption (cost), cake solids/moisture content, and solids capture efficiency. An associated secondary performance metric is algae mass loading rate, which depends on hydraulic loading rate, area-specific hydraulic processing capacity (gpm/in²), filter-to-capillary-belt contact area, and influent algae concentration. The system is capable of dewatering 4 g/L (0.4%) algae streams to solids concentrations up to 30% with capture efficiencies of 80+%; however, mass production is highly dependent on average cell size (which determines filter mesh size and percent open area). This paper will present data detailing the scaling efforts to date. Characterization and performance data for novel membranes, as well as optimization of off-the-shelf filter materials, will be examined. Third-party validation from Ohio University on performance and operating cost, as well as design modification suggestions, will be discussed. Extrapolation of current productivities

  17. Leveraging the Thousands of Known Planets to Inform TESS Follow-Up

    Science.gov (United States)

    Ballard, Sarah

    2017-10-01

    The Solar System furnishes our most familiar planetary architecture: many planets, orbiting nearly coplanar to one another. However, a typical system of planets in the Milky Way orbits a much smaller M dwarf star, and these stars furnish a blueprint that differs in key ways from the conditions that nourished the evolution of life on Earth. With ensemble studies of hundreds to thousands of exoplanets, I will describe the emerging links between planet formation from disks, the orbital dynamics of planets, and the content and observability of planetary atmospheres. These quantities can be tied to observables even in discovery light curves, to enable judicious selection of follow-up targets from the ground and from space. After TESS exoplanet discoveries start in earnest, studies of individual planets with large, space-based platforms will comprise the clear next step toward understanding the hospitability of the Milky Way to life. Our success hinges upon leveraging the many thousands of planet discoveries in hand to determine how to use these precious and limited resources.

  18. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    Directory of Open Access Journals (Sweden)

    L. Gong

    2011-08-01

    Full Text Available World water resources have primarily been analysed by global-scale hydrological models in recent decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm
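
    The storage-distribution idea in the abstract can be sketched in a few lines: bin a grid cell's topographic index values into classes, assign each class a storage capacity proportional to the topographic index range scaled by a single parameter, and read off the saturated area fraction for a given average storage. The function and parameter names below are hypothetical, not the TRG reference implementation.

```python
import numpy as np

# Minimal sketch of a TOPMODEL-style storage distribution (hypothetical
# names and parameter values; not the TRG reference implementation).
def storage_distribution(topo_index, n_classes=20, scale_param=0.5):
    """Distribute maximum storage capacity over topographic index classes.

    Capacity is taken proportional to the topographic index range,
    scaled by a single parameter, as the abstract describes.
    """
    ti_min, ti_max = topo_index.min(), topo_index.max()
    edges = np.linspace(ti_min, ti_max, n_classes + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Fraction of grid-cell area falling into each topographic index class.
    counts, _ = np.histogram(topo_index, bins=edges)
    area_frac = counts / counts.sum()
    # Wetter classes (higher index) are assigned less storage headroom.
    capacity = scale_param * (ti_max - centers)
    return centers, area_frac, capacity

def saturated_fraction(area_frac, capacity, mean_storage):
    """Area fraction whose local capacity is exceeded by the mean storage."""
    return area_frac[capacity <= mean_storage].sum()

# Example: synthetic topographic indices for one large-scale grid cell.
rng = np.random.default_rng(0)
ti = rng.gamma(shape=2.0, scale=2.0, size=10_000)
centers, frac, cap = storage_distribution(ti)
print("saturated fraction:", saturated_fraction(frac, cap, mean_storage=1.0))
```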

  19. Deepwater drilling; Jakten paa de store dyp

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    Recent technological development has made it possible to drill for oil and gas at the impressive depth of 3000 metres. An increasing part of the world's oil and gas discoveries are made in deep or ultra-deep waters. Ultra-deep waters are those exceeding 1500 metres. Since drilling at more than 500 metres started at the end of the 1970s, 32 discoveries of about 500 million barrels of extractable oil or gas have been made. These finds amount to almost 60 thousand million barrels of oil equivalents. Most of the effort has been made off the coasts of Brazil and West Africa and in the Gulf of Mexico. Deepwater projects have been a field of priority for Norwegian oil companies in their search for international commissions. It is frequently time-consuming, expensive and technologically challenging to drill at great depths. The article describes the Atlantis concept, which may reduce the complexity and costs of deepwater activities. This involves making an artificial sea bottom in the form of an air-filled buoy anchored at a depth of 200 - 300 metres. Production wells or exploration wells and risers are extended from the real bottom to the artificial one.

  20. High performance cellular level agent-based simulation with FLAME for the GPU.

    Science.gov (United States)

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and the ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular-level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template-driven framework for agent-based modelling (ABM) on parallel architectures, ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvements in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  1. ATLAS Grid Workflow Performance Optimization

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment grid workflow system routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total, more than 300 PB of data is distributed over more than 150 sites in the WLCG. At this scale, small improvements in software and computing performance and workflows can lead to significant resource usage gains. ATLAS is reviewing, together with CERN IT experts, several typical simulation and data processing workloads for potential performance improvements in terms of memory and CPU usage, disk and network I/O. All ATLAS production and analysis grid jobs are instrumented to collect many performance metrics for detailed statistical studies using modern data analytics tools like ElasticSearch and Kibana. This presentation will review and explain the performance gains of several ATLAS simulation and data processing workflows and present analytics studies of the ATLAS grid workflows.
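
    The kind of statistical study described, aggregating per-job performance metrics by workflow to find where small gains scale up, can be illustrated with a minimal sketch. The records and field names below are invented stand-ins for metrics that would be harvested from the instrumented jobs.

```python
from statistics import mean

# Hypothetical per-job performance records, with invented field names, as
# might be harvested from an analytics index of instrumented grid jobs.
jobs = [
    {"workflow": "simulation", "cpu_eff": 0.91, "max_rss_gb": 1.9},
    {"workflow": "simulation", "cpu_eff": 0.88, "max_rss_gb": 2.1},
    {"workflow": "reconstruction", "cpu_eff": 0.74, "max_rss_gb": 3.4},
    {"workflow": "reconstruction", "cpu_eff": 0.69, "max_rss_gb": 3.8},
]

# Group by workflow and summarise the metrics that matter for optimization:
# low CPU efficiency or high peak memory flags where small gains scale up.
by_workflow = {}
for job in jobs:
    by_workflow.setdefault(job["workflow"], []).append(job)

for wf, records in sorted(by_workflow.items()):
    effs = [r["cpu_eff"] for r in records]
    rss = [r["max_rss_gb"] for r in records]
    print(f"{wf}: mean cpu_eff = {mean(effs):.2f}, "
          f"mean max_rss = {mean(rss):.1f} GB")
```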

  2. A microneedle array able to inject tens of thousands of cells simultaneously

    Science.gov (United States)

    Teichert, Gregory H.; Burnett, Sandra; Jensen, Brian D.

    2013-09-01

    This paper presents a biological microelectromechanical system for injecting foreign particles into thousands of cells simultaneously. The system inserts an array of microneedles into a monolayer of cells, and the foreign particles enter the cells by diffusion. The needle array is fabricated using a series of deep reactive ion etches and produces about 4 million needles that average 1 μm in diameter and 8 μm in length with 10 μm spacing. The insertion of the needles is controlled through a compliant suspension. The compliant suspension was designed to provide for needle motion into the cells while restraining rotations or transverse motions that could result in tearing of the cell membranes. Testing was performed using propidium iodide, a membrane impermeable dye, injected into HeLa cells. Average cell survivability was found to be 97.7%, and up to 97.9% of the surviving cells received the propidium iodide.

  3. A microneedle array able to inject tens of thousands of cells simultaneously

    International Nuclear Information System (INIS)

    Teichert, Gregory H; Jensen, Brian D; Burnett, Sandra

    2013-01-01

    This paper presents a biological microelectromechanical system for injecting foreign particles into thousands of cells simultaneously. The system inserts an array of microneedles into a monolayer of cells, and the foreign particles enter the cells by diffusion. The needle array is fabricated using a series of deep reactive ion etches and produces about 4 million needles that average 1 μm in diameter and 8 μm in length with 10 μm spacing. The insertion of the needles is controlled through a compliant suspension. The compliant suspension was designed to provide for needle motion into the cells while restraining rotations or transverse motions that could result in tearing of the cell membranes. Testing was performed using propidium iodide, a membrane impermeable dye, injected into HeLa cells. Average cell survivability was found to be 97.7%, and up to 97.9% of the surviving cells received the propidium iodide. (paper)

  4. Development of performance assessment methodology for nuclear waste isolation in geologic media

    International Nuclear Information System (INIS)

    Bonano, E.J.; Chu, M.S.Y.; Cranwell, R.M.; Davis, P.A.

    1986-01-01

    The analysis of the processes involved in the burial of nuclear wastes can be performed only with reliable mathematical models and computer codes, as opposed to conducting experiments, because the associated time scales are on the order of tens of thousands of years. These analyses are concerned primarily with the migration of radioactive contaminants from the repository to the environment accessible to humans. Modeling of this phenomenon depends on a large number of other phenomena taking place in the geologic porous and/or fractured medium. These are ground-water flow, physicochemical interactions of the contaminants with the rock, heat transfer, and mass transport. Once the radionuclides have reached the accessible environment, the pathways to humans and health effects are estimated. A performance assessment methodology for a potential high-level waste repository emplaced in a basalt formation has been developed for the US Nuclear Regulatory Commission.

  5. Spring comes for ATLAS

    CERN Multimedia

    Butin, F.

    2004-01-01

    (First published in the CERN weekly bulletin 24/2004, 7 June 2004.) A short while ago the ATLAS cavern underwent a spring clean, marking the end of the installation of the detector's support structures and the cavern's general infrastructure. The list of infrastructure to be installed in the ATLAS cavern from September 2003 was long: a thousand tonnes of mechanical structures spread over 13 storeys, two lifts, two 65-tonne overhead travelling cranes 25 metres above the cavern floor, with a telescopic boom and cradle to access the remaining 10 metres of the cavern, a ventilation system for the 55 000 cubic metre cavern, a drainage system, a standard sprinkler system and an innovative foam fire-extinguishing system, as well as the external cryogenic system for the superconducting magnets and the liquid argon calorimeters (comprising, amongst other things, two helium refrigeration units, a nitrogen refrigeration unit and 5 km of piping for gaseous or liquid helium and nitrogen), not to mention the handling eq...

  6. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. Consideration of all these factors is a precondition for a truly integrated practice and, as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  7. Performance of a pilot-scale, steam-blown, pressurized fluidized bed biomass gasifier

    Science.gov (United States)

    Sweeney, Daniel Joseph

    With the discovery of vast fossil resources, and the subsequent development of the fossil fuel and petrochemical industry, the role of biomass-based products has declined. However, concerns about the finite and decreasing amount of fossil and mineral resources, in addition to the health and climate impacts of fossil resource use, have elevated interest in innovative methods for converting renewable biomass resources into products that fit our modern lifestyle. Thermal conversion through gasification is an appealing method for utilizing biomass due to its operability with a wide variety of feedstocks at a wide range of scales, the variety of uses for the product (e.g., transportation fuel production, electricity production, chemicals synthesis), and, in many cases, significantly lower greenhouse gas emissions. In spite of the advantages of gasification, several technical hurdles have hindered its commercial development. A number of studies have focused on laboratory-scale and atmospheric biomass gasification. However, few studies have reported on pilot-scale, woody biomass gasification under pressurized conditions. The purpose of this research is an assessment of the performance of a pilot-scale, steam-blown, pressurized fluidized bed biomass gasifier. The 200 kWth fluidized bed gasifier is capable of operation using solid feedstocks at feed rates up to 65 lb/hr, bed temperatures up to 1600°F, and pressures up to 8 atm. Gasifier performance was assessed under various temperature, pressure, and feedstock (untreated woody biomass, dark and medium torrefied biomass) conditions by measuring product gas yield and composition, residue (e.g., tar and char) production, and mass and energy conversion efficiencies. Elevated temperature and pressure, and feedstock pretreatment, were shown to have a significant influence on gasifier operability, tar production, carbon conversion, and process efficiency. High-pressure and temperature gasification of dark torrefied biomass

  8. High-K Strategy Scale: A Measure of the High-K Independent Criterion of Fitness

    Directory of Open Access Journals (Sweden)

    Cezar Giosan

    2006-01-01

    Full Text Available The present study aimed to test whether factors documented in the literature as indicators of a high-K reproductive strategy have effects on fitness in extant humans. A 26-item High-K Strategy Scale comprising these factors was developed and tested on 250 respondents. Items tapping into health and attractiveness, upward mobility, social capital and risk consideration were included in the scale. As expected, the scale showed a significant correlation with perceived offspring quality and a weak but significant association with actual number of children. The scale had a high reliability coefficient (Cronbach's Alpha = .92). Expected correlations were found between the scale and number of medical diagnoses, education, perceived social support, and number of previous marriages, strengthening the scale's construct validity. Implications of the results are discussed.
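
    For reference, the reliability coefficient quoted above can be computed from a respondents-by-items score matrix as follows; the data here are synthetic, generated only to illustrate the formula, and are not the study's data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data only: 250 respondents x 26 items scored 1-5, with a
# shared latent factor so the items correlate as a coherent scale would.
rng = np.random.default_rng(0)
latent = rng.normal(size=(250, 1))
items = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(250, 26))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```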

  9. Scaling of black silicon processing time by high repetition rate femtosecond lasers

    Directory of Open Access Journals (Sweden)

    Nava Giorgio

    2013-11-01

    Full Text Available Surface texturing of silicon substrates is performed by femtosecond laser irradiation at high repetition rates. Various fabrication parameters are optimized in order to achieve very high absorptance in the visible region from the micro-structured silicon wafer as compared to the unstructured one. A 70-fold reduction of the processing time is demonstrated by increasing the laser repetition rate from 1 kHz to 200 kHz. Further scaling up to 1 MHz can be foreseen.

  10. High performance Li3V2(PO4)3/C composite cathode material for lithium ion batteries studied in pilot scale test

    International Nuclear Information System (INIS)

    Chen Zhenyu; Dai Changsong; Wu Gang; Nelson, Mark; Hu Xinguo; Zhang Ruoxin; Liu Jiansheng; Xia Jicai

    2010-01-01

    Li3V2(PO4)3/C composite cathode material was synthesized via a carbothermal reduction process in a pilot-scale production test using battery-grade raw materials, with the aim of studying the feasibility of practical applications. XRD, FT-IR, XPS, CV, EIS and battery charge-discharge tests were used to characterize the as-prepared material. The XRD and FT-IR data suggested that the as-prepared Li3V2(PO4)3/C material exhibits an orderly monoclinic structure based on the connectivity of PO4 tetrahedra and VO6 octahedra. Half-cell tests indicated that an excellent high-rate cyclic performance was achieved on the Li3V2(PO4)3/C cathodes in the voltage range of 3.0-4.3 V, retaining a capacity of 95% (96 mAh/g) after 100 cycles at a 20C discharge rate. The low-temperature performance of the cathode was further evaluated, showing 0.5C discharge capacities of 122 and 119 mAh/g at -25 and -40 °C, respectively. The discharge capacity of graphite//Li3V2(PO4)3 batteries with a designed battery capacity of 14 Ah is as high as 109 mAh/g, with a capacity retention of 92% after 224 cycles at a 2C discharge rate. The promising high-rate and low-temperature performance observed in this work suggests that Li3V2(PO4)3/C is a very strong candidate as a cathode in next-generation Li-ion batteries for electric vehicle applications.

  11. High-resolution wavefront control of high-power laser systems

    International Nuclear Information System (INIS)

    Brase, J.; Brown, C.; Carrano, C.; Kartz, M.; Olivier, S.; Pennington, D.; Silva, D.

    1999-01-01

    Nearly every new large-scale laser system application at LLNL has requirements for beam control which exceed the current level of available technology. For applications such as inertial confinement fusion, laser isotope separation, and laser machining, the ability to transport significant power to a target while maintaining good beam quality is critical. There are many ways that laser wavefront quality can be degraded. Thermal effects due to the interaction of high-power laser or pump light with the internal optical components or with the ambient gas are common causes of wavefront degradation. For many years, adaptive optics based on thin deformable glass mirrors with piezoelectric or electrostrictive actuators have been used to remove the low-order wavefront errors from high-power laser systems. These adaptive optics systems have successfully improved laser beam quality, but have also generally revealed additional high-spatial-frequency errors, both because the low-order errors have been reduced and because deformable mirrors have often introduced some high-spatial-frequency components due to manufacturing errors. Many current and emerging laser applications fall into the high-resolution category where there is an increased need for the correction of high-spatial-frequency aberrations, which requires correctors with thousands of degrees of freedom. The largest deformable mirrors currently available have less than one thousand degrees of freedom at a cost of approximately $1M. A deformable mirror capable of meeting these high spatial resolution requirements would be cost prohibitive. Therefore a new approach using a different wavefront control technology is needed. One new wavefront control approach is the use of liquid-crystal (LC) spatial light modulator (SLM) technology for controlling the phase of linearly polarized light. Current LC SLM technology provides high-spatial-resolution wavefront control, with hundreds of thousands of degrees of freedom, more

  12. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.
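
    The interplay of the two discretizations described above, a deterministic grid-based field solve plus particle sampling, can be seen in a minimal 1D electrostatic PIC sketch in normalised units on a periodic domain. This toy example is unrelated to GTC's gyrokinetic formulation; it only illustrates the deposit/solve/gather/push cycle.

```python
import numpy as np

# Minimal 1D electrostatic PIC sketch (normalised units, periodic domain).
L, Ng, Np, dt, steps = 2 * np.pi, 64, 20_000, 0.1, 100
dx = L / Ng
qm = -1.0                                  # electron charge/mass ratio
q = -L / Np                                # macro-particle charge (ions below)

rng = np.random.default_rng(1)
x = rng.uniform(0, L, Np)                  # particle positions
v = rng.normal(0.0, 1.0, Np)               # thermal velocity distribution

k = 2 * np.pi * np.fft.rfftfreq(Ng, d=dx)  # angular wavenumbers for the solve
for _ in range(steps):
    # 1) Deposit charge onto the grid (cloud-in-cell weighting).
    g = x / dx
    i0 = np.floor(g).astype(int) % Ng
    w = g - np.floor(g)
    rho = np.bincount(i0, weights=q * (1 - w), minlength=Ng)
    rho += np.bincount((i0 + 1) % Ng, weights=q * w, minlength=Ng)
    rho = rho / dx + 1.0                   # uniform neutralising ion background
    # 2) Deterministic field solve: Poisson's equation via FFT.
    rho_k = np.fft.rfft(rho)
    phi_k = np.zeros(Ng // 2 + 1, dtype=complex)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2     # grad^2 phi = -rho
    E = np.fft.irfft(-1j * k * phi_k, n=Ng)  # E = -d(phi)/dx
    # 3) Gather the field to the particles and push them (leapfrog).
    Ep = E[i0] * (1 - w) + E[(i0 + 1) % Ng] * w
    v += qm * Ep * dt
    x = (x + v * dt) % L

print("final field energy:", 0.5 * np.sum(E**2) * dx)
```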

  13. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  14. COCAP - A compact carbon dioxide analyser for airborne platforms

    Science.gov (United States)

    Kunz, Martin; Lavrič, Jošt V.; Jeschag, Wieland; Bryzgalov, Maksym; Hök, Bertil; Heimann, Martin

    2014-05-01

    Airborne platforms are a valuable tool for atmospheric trace gas measurements due to their capability of movement in three dimensions, covering spatial scales from metres to thousands of kilometres. Although crewed research aircraft are flexible in payload and range, their use is limited by high initial and operating costs. Small unmanned aerial vehicles (UAV) have the potential for substantial cost reduction, but require lightweight, miniaturized and energy-efficient scientific equipment. We are developing a COmpact Carbon dioxide analyser for Airborne Platforms (COCAP). It contains a non-dispersive infrared CO2 sensor with a nominal full scale of 3000 μmol/mol. Sampled air is dried with magnesium perchlorate before it enters the sensor. This enables measurement of the dry air mole fraction of CO2, as recommended by the World Meteorological Organization. During post-processing, the CO2 measurement is corrected for temperature and pressure variations in the gas line. Allan variance analysis shows that we achieve a precision of better than 0.4 μmol/mol for a 10 s averaging time. We plan to monitor the analyser's stability during flight by measuring reference air from a miniature gas tank at regular intervals. Besides CO2, COCAP measures relative humidity, temperature and pressure of ambient air. An on-board GPS receiver delivers accurate timestamps and allows georeferencing. Data is both stored on a microSD card and simultaneously transferred over a wireless serial interface to a ground station for real-time review. The target weight for COCAP is less than 1 kg. We deploy COCAP on a commercially available fixed-wing UAV (Bormatec Explorer) with a wingspan of 2.2 metres. The UAV has a high payload capacity (2.5 kg) as well as sufficient space in the fuselage (80 x 80 x 600 mm³). It is built from a shock-resistant foam material, which allows quick repair of minor damage in the field. In case of severe damage, spare parts are readily available. Calculations suggest that the
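
    The Allan variance analysis mentioned above can be sketched as follows: average the CO2 readings over windows of length tau and take half the mean squared difference of successive window means. The data below are synthetic white noise, not COCAP measurements; the noise level is chosen so averaging passes the quoted 0.4 μmol/mol near tau = 10 s.

```python
import numpy as np

def allan_deviation(y, rate_hz, taus):
    """Non-overlapping Allan deviation of a time series of readings.

    sigma^2(tau) = 1/2 * mean of (ybar_{i+1} - ybar_i)^2,
    where ybar_i are averages over windows of length tau.
    """
    out = []
    for tau in taus:
        n = int(round(tau * rate_hz))        # samples per averaging window
        m = len(y) // n                      # number of complete windows
        ybar = np.asarray(y)[: m * n].reshape(m, n).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2)))
    return np.array(out)

# Synthetic example: one hour of 1 Hz data with 1.2 umol/mol white noise,
# which should average down roughly as 1/sqrt(tau).
rng = np.random.default_rng(0)
co2 = 400.0 + 1.2 * rng.normal(size=3600)
for tau, adev in zip([1, 10, 100], allan_deviation(co2, 1.0, [1, 10, 100])):
    print(f"tau = {tau:4d} s  adev = {adev:.3f} umol/mol")
```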

  15. Nucleon electric dipole moments in high-scale supersymmetric models

    International Nuclear Information System (INIS)

    Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi

    2015-01-01

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino gives an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models, the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron one in order to discriminate among the high-scale SUSY models.

  16. Nucleon electric dipole moments in high-scale supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)

    2015-11-12

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of the anomaly and gauge mediations, the gluino gives an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models, the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron one in order to discriminate among the high-scale SUSY models.
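
    For reference, the CP-violating Weinberg operator discussed in both records is the dimension-six three-gluon operator, written here in one common normalisation (conventions for the prefactor vary between papers):

```latex
\mathcal{L}_W = \frac{w}{3}\, f^{abc}\,
  G^{a}_{\mu\nu}\, \widetilde{G}^{b\,\nu\rho}\, G^{c\,\mu}_{\ \ \rho},
\qquad
\widetilde{G}^{a\,\mu\nu} = \tfrac{1}{2}\,\epsilon^{\mu\nu\alpha\beta}\, G^{a}_{\alpha\beta},
```

    where G is the gluon field strength and w the Wilson coefficient which, in these scenarios, is generated by the gluino chromoelectric dipole moment.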

  17. Transport in JET high performance plasmas

    International Nuclear Information System (INIS)

    2001-01-01

    Two types of high performance scenario have been produced in JET during the DTE1 campaign. One of them is the well-known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, suggest that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the predictions of conventional neoclassical theory, is discussed. (author)

  18. Transport in JET high performance plasmas

    International Nuclear Information System (INIS)

    1999-01-01

    Two types of high performance scenario have been produced in JET during the DTE1 campaign. One of them is the well-known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, suggest that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the predictions of conventional neoclassical theory, is discussed. (author)

  19. Cross-cultural Study of Understanding of Scale and Measurement: Does the everyday use of US customary units disadvantage US students?

    Science.gov (United States)

    Delgado, Cesar

    2013-06-01

    Following a sociocultural perspective, this study investigates how students who have grown up using the SI (Système International d'Unités) (metric) or US customary (USC) systems of units for everyday use differ in their knowledge of scale and measurement. Student groups were similar in terms of socioeconomic status, curriculum, native-language transparency of number word structure, type of school, and makeup by gender and grade level, while varying by native system of measurement. Their performance on several tasks was compared using binary logistic regression, ordinal logistic regression, and analysis of variance, with gender and grade level as covariates. Participants included 17 USC-native and 89 SI-native students in a school in Mexico, and 31 USC-native students in a school in the Midwestern USA. SI-native students performed at a significantly higher level in estimating the length of a metre and on a conceptual task (coordinating relative size and absolute size). No statistically significant differences were found on tasks involving factual knowledge about objects or units, scale construction, or estimation of other units. USC-native students in the US school performed at a higher level on the smallest-known-object task. These findings suggest that the more transparent SI system better supports conceptual thinking about scale and measurement than the idiosyncratic USC system. Greater emphasis on the SI system and more complete adoption of the SI system for everyday life may improve understanding among US students. Advancing sociocultural theory, systems of units were found to mediate learners' understanding of scale and measurement, much as number words mediate counting and problem solving.

  20. High speed ink aggregates are ejected from tattoos during Q-switched Nd:YAG laser treatments.

    Science.gov (United States)

    Murphy, Michael J

    2018-03-25

    Dark material has been observed embedded within glass slides following Q-switched Nd:YAG laser treatment of tattoos. It appears that these fragments are ejected at high speed from the skin during the treatment. Light microscopic analysis of the slides reveals aggregates of dark fragmented material, presumably tattoo ink, with evidence of fractured/melted glass. Photomicrographs reveal that the sizes of these aggregates are in the range 12 μm to 0.5 mm. Tattoo ink fragments were clearly observed on the surface of and embedded within the glass slides. Surface aggregates were observed as a fine dust and were easily washed off, while deeper fragments remained in situ. The embedded fragments were not visible to the unaided eye. Some fragments appeared to have melted, yielding an "insect-like" appearance. These were found to be located between approximately 0.2 and 1 mm deep in the glass. Given the particle masses and kinetic energies attained by some of these aggregates, their velocities when leaving the skin may be hundreds to thousands of metres per second. However, the masses of the aggregates are minuscule, meaning that laser operators may be subjected to these high-speed aggregates without their knowledge. These high-speed fragments of ink may pose a contamination risk to laser operators. Lasers Surg. Med. 9999:1-7, 2018. © 2018 Wiley Periodicals, Inc.
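
    A rough order-of-magnitude check of the quoted sizes and velocities, assuming a spherical 12 μm aggregate and an ink density of about 1.5 g/cm³ (both assumptions for illustration, not values from the paper):

```latex
m = \rho\,\frac{\pi d^{3}}{6}
  \approx 1500 \times \frac{\pi\,(12\times10^{-6})^{3}}{6}
  \approx 1.4\times10^{-12}~\mathrm{kg},
\qquad
E_k = \tfrac{1}{2} m v^{2}
  \approx \tfrac{1}{2}\,(1.4\times10^{-12})\,(10^{3})^{2}
  \approx 7\times10^{-7}~\mathrm{J},
```

    i.e. even at 1000 m/s the kinetic energy per aggregate is well below a microjoule, consistent with fragments that can embed in glass yet go unnoticed by operators.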

  1. Validation of SCALE for High Temperature Gas-Cooled Reactors Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ilas, Germina [ORNL; Ilas, Dan [ORNL; Kelly, Ryan P [ORNL; Sunny, Eva E [ORNL

    2012-08-01

    This report documents verification and validation studies carried out to assess the performance of the SCALE code system methods and nuclear data for modeling and analysis of High Temperature Gas-Cooled Reactor (HTGR) configurations. Validation data were available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhE Handbook), prepared by the International Reactor Physics Experiment Evaluation Project, for two different HTGR designs: prismatic and pebble bed. SCALE models have been developed for HTTR, a prismatic fuel design reactor operated in Japan, and HTR-10, a pebble bed reactor operated in China. The models were based on benchmark specifications included in the 2009, 2010, and 2011 releases of the IRPhE Handbook. SCALE models for the HTR-PROTEUS pebble bed configuration at the PROTEUS critical facility in Switzerland have also been developed, based on benchmark specifications included in a 2009 IRPhE draft benchmark. The development of the SCALE models has involved a series of investigations to identify particular issues associated with modeling the physics of HTGRs and to understand and quantify the effect of particular modeling assumptions on calculation-to-experiment comparisons.

  2. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface including groundwater dynamics up into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial for understanding the runtime behavior and identifying optimum model settings, and is an efficient way to uncover potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we present our experience with the coupling, application tuning (e.g. a 5-fold speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model (CLM) of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  3. A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory

    Science.gov (United States)

    Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin

    2015-09-01

    Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient. Thus, a fully run-time configurable, high performance, dependable storage concept requiring only a minimal set of logic or software is needed. The solution is based on composite erasure coding and can be adjusted for altered mission duration or changing environmental conditions.
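
    The recovery principle behind erasure coding can be illustrated with the simplest possible case, a single XOR parity block. The paper's composite erasure coding is considerably more elaborate; this sketch only shows why a lost flash page remains recoverable from its stripe.

```python
# Minimal single-erasure XOR parity sketch (illustration only; not the
# composite erasure coding scheme described in the paper).
def xor_blocks(blocks):
    """Bytewise XOR of equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data_blocks):
    """Append one parity block; any single lost block is then recoverable."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def recover(stripe, lost_index):
    """Rebuild the block at lost_index by XOR-ing all the survivors."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)

stripe = encode([b"page0___", b"page1___", b"page2___"])
assert recover(stripe, 1) == b"page1___"   # simulate one failed flash page
print("recovered:", recover(stripe, 1))
```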

  4. A Simple Physics-Based Model Predicts Oil Production from Thousands of Horizontal Wells in Shales

    KAUST Repository

    Patzek, Tadeusz

    2017-10-18

    Over the last six years, crude oil production from shales and the ultra-deep GOM in the United States has accounted for most of the net increase of global oil production. It is therefore important to have a good predictive model of oil production and ultimate recovery in shale wells. Here we introduce a simple model of producing oil and solution gas from horizontal hydrofractured wells. This model is consistent with the basic physics and geometry of the extraction process. We then apply our model to thousands of wells in the Eagle Ford shale. Given well geometry, we obtain a one-dimensional nonlinear pressure diffusion equation that governs the flow of mostly oil and solution gas. In principle, solutions of this equation depend on many parameters, but in practice and within a given oil shale, all but three can be fixed at typical values, leading to a nonlinear diffusion problem we linearize and solve exactly with a scaling
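
    A generic form of the governing equation described, hedged because the abstract does not reproduce the authors' exact coefficients or boundary conditions, is

```latex
\frac{\partial p}{\partial t}
  = \frac{\partial}{\partial x}\!\left[\alpha(p)\,\frac{\partial p}{\partial x}\right],
```

    where p is pressure, x runs between adjacent hydrofractures, and alpha(p) is a pressure-dependent hydraulic diffusivity; the three free parameters mentioned would plausibly enter through alpha and the well geometry.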

  5. Mixing and scale affect moving bed biofilm reactor (MBBR) performance

    NARCIS (Netherlands)

    Kamstra, Andries; Blom, Ewout; Terjesen, Bendik Fyhn

    2017-01-01

    Moving Bed Biofilm Reactors (MBBR) are used increasingly in closed systems for the farming of fish. Scaling, i.e. the design of units of increasing size, is an important issue in general bioreactor design, since mixing behaviour will differ between small and large scales. Research is mostly performed on

  6. Three-dimensional analysis of free-electron laser performance using brightness scaled variables

    Directory of Open Access Journals (Sweden)

    M. Gullans

    2008-06-01

    Full Text Available A three-dimensional analysis of radiation generation in a free-electron laser (FEL) is performed in the small-signal regime. The analysis includes beam conditioning, harmonic generation, flat beams, and a new scaling of the FEL equations using the six-dimensional beam brightness. The six-dimensional beam brightness is an invariant under Liouvillian flow; therefore, any nondissipative manipulation of the phase space, performed, for example, in order to optimize FEL performance, must conserve this brightness. This scaling is more natural than the commonly used scaling with the one-dimensional growth rate. The brightness-scaled equations allow for the succinct characterization of the optimal FEL performance under various additional constraints. The analysis allows for the simple evaluation of gain-enhancement schemes based on beam phase space manipulations such as emittance exchange and conditioning. An example comparing the gain in the first and third harmonics of round or flat and conditioned or unconditioned beams is presented.

  7. The ALEPH Detector (Apparatus for LEp PHysics)

    CERN Multimedia

    2002-01-01

    ALEPH is a 4$\pi$ detector designed to give as much detailed information as possible about the complex events produced in high energy $\mathrm{e}^+\mathrm{e}^-$ collisions. A superconducting coil 5 metres in diameter and 6 metres long produces a 1.5 Tesla field in the beam direction. Particle detection is accomplished in layers, with each layer performing a particular function. A high resolution vertex detector, consisting of layers of silicon with double-sided readout, provides $r$, $\phi$ and $z$ coordinates and identifies decay vertices of tau leptons, charm and beauty hadrons. The Inner Tracking Chamber (ITC) is a cylindrical drift chamber with eight axial layers. It gives a high spatial resolution and good track separation, and is also an essential part of the trigger system. The Time Projection Chamber (TPC), 3.6 metres in diameter and 4.4 metres long, measures track momenta and directions. It also provides up to 338 energy loss measurements per track for particle identification. The momentum resolution of...

  8. Dynamic Performance of High Bypass Ratio Turbine Engines With Water Ingestion

    Science.gov (United States)

    Murthy, S. N. B.

    1996-01-01

    The research on dynamic performance of high bypass turbofan engines includes studies on inlets, turbomachinery and the total engine system operating with air-water mixtures; the water may be in vapor, droplet, or film form, and combinations thereof. Prediction codes (WISGS, WINCOF, WINCOF-1, WINCLR, and the Transient Engine Performance Code) for performance changes, as well as changes in blade-casing clearance, have been established and demonstrated in application to actual, generic engines. In view of the continuous changes in water distribution in turbomachinery, the performance of both components and the total engine system must be determined in a time-dependent mode; hence, the determination of clearance changes also requires a time-dependent approach. In general, the performance and clearance changes cannot be scaled with respect to either operating or ingestion conditions. Removal of water prior to phase change is the most effective means of avoiding ingestion effects. Sufficient background has been established to perform definitive, full-scale tests on a set of components and a complete engine to establish engine control and operability with various air-water vapor-water mixtures.

  9. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicking and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing calculations that take several days to several hours. Modern trends in computer technology show an increase of CPU cores in workstations, speed increases in local networks, and, as a result, a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
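
    The embarrassingly parallel pattern described, independent image blocks in, independent results out, maps directly onto a worker pool. The kernel below is a placeholder for a real photogrammetric operation such as tie point measurement or DTM calculation; the names and block count are invented for the example.

```python
from multiprocessing import Pool

# Sketch of the parallel pattern described: each image block is processed
# independently, so blocks can be farmed out to a pool of worker processes.
def process_block(block_id):
    # Placeholder workload; a real kernel would read the imagery for this
    # block, compute, and write its output independently of other blocks.
    return block_id, sum(i * i for i in range(100_000))

if __name__ == "__main__":
    block_ids = range(64)                  # e.g. tiles of a large mosaic
    with Pool(processes=8) as pool:        # one worker per CPU core
        for block_id, _result in pool.imap_unordered(process_block, block_ids):
            print(f"block {block_id} done")
```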

  10. High-tech industries' overseas investment performance evaluation - Application of data envelopment analysis

    Directory of Open Access Journals (Sweden)

    Ridong Hu

    2013-12-01

    Full Text Available With the rapid change of the social environment, Mainland China has become a new economic market due to the great domestic demand caused by its enormous population and its increasing economic growth rate. Taiwanese businesses have gradually turned to developing in China under the pressure of increasing domestic wages and land costs for expanding factories, as well as the enhancement of environmental protection. Mainland China presents the advantages of ample land, low labor costs, monoethnicity, and easy language communication, making it an attractive major investment location for Taiwanese high-tech industries. Data Envelopment Analysis (DEA) is applied to measure the overseas investment efficiency of Taiwanese high-tech businesses in China, where the Delphi Method is used for selecting the inputs of number of employees, R&D expenses, and gross sales in total assets. Sensitivity analysis is further utilized to identify the most efficient unit and individual units with operating efficiency. The research results show that: (1) three high-tech businesses that present constant returns to scale perform optimally in overseas investment efficiency; (2) two high-tech companies with decreasing returns to scale could improve their overseas investment efficiency by decreasing the scale to enhance marginal returns; and (3) sixteen high-tech enterprises reveal increasing returns to scale, showing that they could expand the scale to enhance marginal returns and further promote efficiency.
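
    The DEA efficiency scores behind such results come from solving one small linear program per decision-making unit (DMU). A minimal input-oriented CCR sketch, with invented inputs (employees, R&D expense) and output (gross sales) rather than the paper's data, is:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA sketch (illustrative data, not the paper's).
# Rows of X are inputs per DMU; rows of Y are outputs per DMU.
X = np.array([[50, 4.0], [80, 6.5], [30, 2.0], [60, 9.0]], dtype=float).T  # (m, n)
Y = np.array([[120.0], [150.0], [90.0], [110.0]], dtype=float).T           # (s, n)
m, n = X.shape
s = Y.shape[0]

def efficiency(o):
    """Solve: min theta  s.t.  X lam <= theta x_o,  Y lam >= y_o,  lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]             # decision variables: theta, lambda_1..n
    A_in = np.c_[-X[:, o], X]               # X lam - theta x_o <= 0
    A_out = np.c_[np.zeros(s), -Y]          # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                          # theta = 1 means CCR-efficient

for o in range(n):
    print(f"DMU {o}: efficiency = {efficiency(o):.3f}")
```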

  11. Satellite Remote Sensing of Cropland Characteristics in 30m Resolution: The First North American Continental-Scale Classification on High Performance Computing Platforms

    Science.gov (United States)

    Massey, Richard

    Cropland characteristics and accurate maps of their spatial distribution are required to develop strategies for global food security through continental-scale assessments and agricultural land use policies. North America is the major producer and exporter of coarse grains, wheat, and other crops. While cropland characteristics such as crop types are available at country scale in North America, continental-scale cropland products at sufficiently fine resolution, such as 30m, are lacking. Additionally, automated, open, and rapid methods to map cropland characteristics over large areas without the need for ground samples are needed on efficient high performance computing platforms for timely and long-term cropland monitoring. In this study, I developed novel, automated, and open methods to map cropland extent, crop intensity, and crop types on the North American continent using large remote sensing datasets on high-performance computing platforms. First, a novel method was developed in this study to fuse pixel-based classification of continental-scale Landsat data, using the Random Forest algorithm available on the Google Earth Engine cloud computing platform, with an object-based classification approach, recursive hierarchical segmentation (RHSeg), to map cropland extent at continental scale. Using this fusion method, a continental-scale cropland extent map for North America at 30m spatial resolution for the nominal year 2010 was produced. In this map, the total cropland area for North America was estimated at 275.2 million hectares (Mha). This map was assessed for accuracy using randomly distributed samples derived from the United States Department of Agriculture (USDA) cropland data layer (CDL), the Agriculture and Agri-Food Canada (AAFC) annual crop inventory (ACI), Servicio de Informacion Agroalimentaria y Pesquera (SIAP) agricultural boundaries for Mexico, and photo-interpretation of high-resolution imagery. The overall accuracies of the map are 93.4% with a

  12. Spatial distribution of enzyme driven reactions at micro-scales

    Science.gov (United States)

    Kandeler, Ellen; Boeddinghaus, Runa; Nassal, Dinah; Preusser, Sebastian; Marhan, Sven; Poll, Christian

    2017-04-01

    Studies of microbial biogeography can often provide key insights into the physiologies, environmental tolerances, and ecological strategies of the soil microorganisms that dominate in natural environments. In comparison with aquatic systems, soils are particularly heterogeneous. Soil heterogeneity results from the interaction of a hierarchical series of interrelated variables that fluctuate at many different spatial and temporal scales. Whereas the spatial dependence of chemical and physical soil properties is well known at scales ranging from decimetres to several hundred metres, the spatial structure of soil enzymes is less clear. Previous work has primarily focused on spatial heterogeneity at a single analytical scale using the distribution of individual cells, specific types of organisms or collective parameters such as bacterial abundance or total microbial biomass. There are fewer studies that have considered variations in community function and soil enzyme activities. This presentation will give an overview of recent studies focusing on spatial patterns of different soil enzymes in the terrestrial environment. Whereas zymography allows the visualization of enzyme patterns in the close vicinity of roots, micro-sampling strategies followed by MUF analyses clarify micro-scale patterns of enzymes associated with specific microhabitats (micro-aggregates, organo-mineral complexes, subsoil compartments).

  13. Scaling earthquake ground motions for performance-based assessment of buildings

    Science.gov (United States)

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.

    2011-01-01

    The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter is presented. A scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.
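
    Method (1) reduces to a single amplitude factor applied to both horizontal components of a record pair. A sketch with invented spectra follows; a real implementation would use response spectra computed from recorded accelerograms and a code-specified target.

```python
import numpy as np

# Sketch of geometric-mean scaling: one factor brings the geometric-mean
# spectral acceleration of a component pair to a target at the fundamental
# period (spectra below are synthetic placeholders, not real records).
periods = np.linspace(0.05, 4.0, 80)
sa_comp1 = 0.8 * np.exp(-periods)          # pseudo-acceleration spectrum (g)
sa_comp2 = 0.6 * np.exp(-0.9 * periods)

geo_mean = np.sqrt(sa_comp1 * sa_comp2)

T1 = 1.0                                   # fundamental period of the building (s)
target_sa = 0.35                           # target spectral acceleration at T1 (g)
factor = target_sa / np.interp(T1, periods, geo_mean)

# The same amplitude factor is applied to both components of the pair.
print(f"scale factor = {factor:.2f}")
```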

  14. Performance/price estimates for cortex-scale hardware: a design space exploration.

    Science.gov (United States)

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. High-Performance Data Converters

    DEFF Research Database (Denmark)

    Steensgaard-Madsen, Jesper

    … -resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented … in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential … -order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers …

  16. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of performing simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
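
    As a minimal illustration of the Monte Carlo methods surveyed (not code from the paper), the sketch below estimates a definite integral and its statistical error; the 1/sqrt(N) convergence of such estimators is what drives the data volumes of large HEP simulation campaigns.

        import numpy as np

        rng = np.random.default_rng(seed=1)

        def mc_integral(f, a, b, n):
            """Plain Monte Carlo estimate of the integral of f over [a, b],
            with its standard error, which shrinks like 1/sqrt(n)."""
            x = rng.uniform(a, b, n)
            y = f(x)
            return (b - a) * y.mean(), (b - a) * y.std(ddof=1) / np.sqrt(n)

        for n in (10**3, 10**5, 10**7):
            est, err = mc_integral(np.sin, 0.0, np.pi, n)  # exact value is 2
            print(f"n={n:>8}: {est:.5f} +/- {err:.5f}")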

  17. Unparticles: Scales and high energy probes

    International Nuclear Information System (INIS)

    Bander, Myron; Feng, Jonathan L.; Rajaraman, Arvind; Shirman, Yuri

    2007-01-01

    Unparticles from hidden conformal sectors provide qualitatively new possibilities for physics beyond the standard model. In the theoretical framework of minimal models, we clarify the relation between energy scales entering various phenomenological analyses. We show that these relations always counteract the effective field theory intuition that higher dimension operators are more highly suppressed, and that the requirement of a significant conformal window places strong constraints on possible unparticle signals. With these considerations in mind, we examine some of the most robust and sensitive probes and explore novel effects of unparticles on gauge coupling evolution and fermion production at high energy colliders. These constraints are presented both as bounds on four-fermion interaction scales and as constraints on the fundamental parameter space of minimal models

  18. Development of High Performance Liquid Chromatography and Mass Spectrometry: a Key Engine of TCM Modernization

    Directory of Open Access Journals (Sweden)

    Zheng-Xiang Zhang

    2015-04-01

    Full Text Available Traditional Chinese Medicine (TCM) has been popular for thousands of years in the prevention and treatment of chronic diseases, acting synergistically with Western medicine while producing mild healing effects and lower side effects. Although many TCMs have been proven effective by modern pharmacological studies and clinical trials, their bioactive constituents and remedial mechanisms are still not well understood. Researchers have made great efforts to explore the real theory of TCM for many years with different strategies. The development of high performance liquid chromatography (HPLC) and mass spectrometry within the recent decade can provide scientists with robust technologies for unveiling the mysteries of TCM. In this paper, important innovations of HPLC and mass spectrometry are reviewed in the application of TCM analysis, from single compound identification to metabolomic strategies.

  19. Lessons Learned in Deploying the World s Largest Scale Lustre File System

    Energy Technology Data Exchange (ETDEWEB)

    Dillow, David A [ORNL; Fuller, Douglas [ORNL; Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Zhang, Zhe [ORNL; Hill, Jason J [ORNL; Shipman, Galen M [ORNL

    2010-01-01

    The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed those of our previously deployed systems by factors of 6x (240 GB/sec) and 17x (10 petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, along with our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.

  20. Dispersion and Cluster Scales in the Ocean

    Science.gov (United States)

    Kirwan, A. D., Jr.; Chang, H.; Huntley, H.; Carlson, D. F.; Mensa, J. A.; Poje, A. C.; Fox-Kemper, B.

    2017-12-01

    Ocean flow space scales range from centimeters to thousands of kilometers. Because of their large Reynolds number these flows are considered turbulent. However, because of rotation and stratification constraints they do not conform to classical turbulence scaling theory. Mesoscale and large-scale motions are well described by geostrophic or "2D turbulence" theory; however, extending this theory to submesoscales has proved to be problematic. One obvious reason is the difficulty of obtaining reliable data over many orders of magnitude of spatial scales in an ocean environment. The goal of this presentation is to provide a preliminary synopsis of two recent experiments that overcame these obstacles. The first experiment, the Grand LAgrangian Deployment (GLAD), was conducted during July 2012 in the eastern half of the Gulf of Mexico. Here approximately 300 GPS-tracked drifters were deployed, with the primary goal of determining whether the relative dispersion of an initially densely clustered array was driven by processes acting at local pair-separation scales or by straining imposed by mesoscale motions. The second experiment was a component of the LAgrangian Submesoscale Experiment (LASER) conducted during the winter of 2016. Here thousands of bamboo plates were tracked optically from an aerostat. Together these two deployments provided an unprecedented data set on dispersion and clustering processes from 1 to 10^6 meter scales. Calculations of statistics such as two-point separations, structure functions, and scale-dependent relative diffusivities showed an inverse energy cascade, as expected, for scales above 10 km, and a forward energy cascade at scales below 10 km with a possible energy input at Langmuir circulation scales. We also find evidence from structure function calculations for surface flow convergence at scales less than 10 km that accounts for material clustering at the ocean surface.
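
    The relative-dispersion statistic at the heart of both experiments can be sketched in a few lines: given drifter trajectories, average the squared pair separation over pairs at each time. The example below uses synthetic random-walk tracks, not GLAD or LASER data.

        import numpy as np

        def relative_dispersion(tracks, pairs):
            """Mean squared pair separation D2(t) = <|x_i(t) - x_j(t)|^2> over
            the chosen drifter pairs; tracks has shape (n_drifters, n_times, 2)
            in metres. A sketch of the two-point statistic, not the experiments' code."""
            seps = [np.sum((tracks[i] - tracks[j])**2, axis=-1) for i, j in pairs]
            return np.mean(seps, axis=0)

        # Hypothetical example: 3 drifters random-walking for 100 time steps
        rng = np.random.default_rng(0)
        tracks = np.cumsum(rng.normal(scale=50.0, size=(3, 100, 2)), axis=1)
        d2 = relative_dispersion(tracks, pairs=[(0, 1), (0, 2), (1, 2)])
        print(d2[:5])  # D2 at the first five time steps, in m^2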

  1. Nanometer-scale patterning of high-Tc superconductors for Josephson junction-based digital circuits

    International Nuclear Information System (INIS)

    Wendt, J.R.; Plut, T.A.; Corless, R.F.; Martens, J.S.; Berkowitz, S.; Char, K.; Johansson, M.; Hou, S.Y.; Phillips, J.M.

    1994-01-01

    A straightforward method for nanometer-scale patterning of high-Tc superconductor thin films is discussed. The technique combines direct-write electron beam lithography with well-controlled aqueous etches and is applied to the fabrication of Josephson junction nanobridges in high-quality, epitaxial thin-film YBa2Cu3O7. We present the results of our studies of the dimensions, yield, uniformity, and mechanism of the junctions along with the performance of a representative digital circuit based on these junctions. Direct current junction parameter statistics measured at 77 K show critical currents of 27.5 μA ± 13% for a sample set of 220 junctions. The Josephson behavior of the nanobridge is believed to arise from the aggregation of oxygen vacancies in the nanometer-scale bridge

  2. The development and performance testing of a biodegradable scale inhibitor

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, Julie; Fidoe, Steve; Jones, Chris

    2006-03-15

    The oil industry is currently facing severe restrictions concerning the discharge of oil field chemicals into the environment. Many materials commonly used in both topside and downhole applications are being phased out for use in the North Sea, and more will be identified. The development of biodegradable and low-toxicity chemicals that afford equal or improved efficacy compared to conventional technology, available at a competitive price, is a current industry challenge. A range of biodegradable materials is increasingly available; however, their limited performance can result in a restricted range of applications. This paper discusses the development and commercialization of a readily biodegradable scale inhibitor, ideal for use in topside applications. This material offers a broad spectrum of activity, notably efficiency against barium sulphate, calcium sulphate and calcium carbonate scales, in a range of water chemistries. A range of performance, compatibility and stability test data, together with an OCNS dataset, will be presented. Comparisons with commonly used chemicals have been made to identify the superior performance of this phosphate ester. This paper discusses a scale inhibitor suitable for use in a variety of conditions which offers enhanced performance combined with a favourable biodegradation profile. This material is of great benefit to the industry, particularly in North Sea applications. (author)

  3. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: the manager carries a heavy workload, and much time must be spent on the management and maintenance of such systems. The nodes in a large-scale cluster easily fall into disorder; with thousands of nodes housed in large rooms, managers can easily confuse one machine with another. How can a large-scale cluster system be managed accurately and effectively? This article introduces ELFms for large-scale cluster systems and proposes an approach for realizing their automatic management. (authors)

  4. Chance performance and floor effects: threats to the validity of the Wechsler Memory Scale--fourth edition designs subtest.

    Science.gov (United States)

    Martin, Phillip K; Schroeder, Ryan W

    2014-06-01

    The Designs subtest allows for the accumulation of raw score points by chance alone, creating the potential for artificially inflated performances, especially in older patients. A random number generator was used to simulate the random selection and placement of cards by 100 test-naive participants, resulting in a mean raw score of 36.26 (SD = 3.86). This produced relatively high scaled scores in the 45-54, 55-64, and 65-69 age groups on Designs II. In the latter age group, in particular, the mean simulated performance resulted in a scaled score of 7, with scores 1 SD below and above the performance mean translating to scaled scores of 5 and 8, respectively. The findings indicate that clinicians should use caution when interpreting Designs II performance in these age groups, as our simulations demonstrated that low average to average range scores occur frequently when patients are relying solely on chance performance. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
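
    The chance-performance simulation described can be sketched generically: have each simulee respond entirely at random under a scoring rule and summarize the resulting raw-score distribution. In the sketch below the scoring rule is a hypothetical stand-in; the actual WMS-IV Designs scoring rules are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(42)

        def chance_raw_score(n_slots=16, n_cards=8, pts_per_match=3):
            """Raw score earned by one simulee responding entirely at random:
            points accrue whenever a randomly placed card happens to land on a
            keyed location. Hypothetical scoring model, not the WMS-IV rules."""
            placed = rng.choice(n_slots, size=n_cards, replace=False)
            keyed = rng.choice(n_slots, size=n_cards, replace=False)
            return pts_per_match * np.intersect1d(placed, keyed).size

        scores = np.array([chance_raw_score() for _ in range(100)])
        print(f"chance-only raw score: M = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")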

  5. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    Science.gov (United States)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented by a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends of time-varying parameters while simultaneously tracking their abrupt changes. A forward orthogonal least squares (FOLS) algorithm aided by mutual information criteria is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
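
    The core idea, expanding each time-varying AR coefficient on basis functions so the fit becomes linear least squares in the expansion coefficients, can be sketched as follows; the basis here is a low-order polynomial stand-in for the paper's multi-scale wavelets, and the FOLS term selection is replaced by plain least squares.

        import numpy as np

        def tvar_basis_fit(y, order, basis):
            """Fit a TVAR(order) model y(t) = sum_i a_i(t) y(t-i) + e(t) where
            each a_i(t) = sum_j c_ij b_j(t); basis has shape (n_basis, len(y)).
            The expansion turns the fit into ordinary least squares in c_ij
            (a sketch; the paper uses wavelets and orthogonal forward regression)."""
            n = len(y)
            # Regressor for coefficient (i, j) is b_j(t) * y(t - i)
            X = np.column_stack([basis[j, order:] * y[order - i:n - i]
                                 for i in range(1, order + 1)
                                 for j in range(basis.shape[0])])
            c, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
            return c.reshape(order, basis.shape[0])

        t = np.linspace(0, 1, 500)
        basis = np.vstack([np.ones_like(t), t, t**2])  # stand-in for a wavelet basis
        y = np.sin(8 * np.pi * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
        coeffs = tvar_basis_fit(y, order=2, basis=basis)
        print(coeffs)  # rows: AR lags, columns: basis-expansion coefficients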

  6. Preliminary design of the cooling system for a gas-cooled, high-fluence fast pulsed reactor (HFFPR)

    International Nuclear Information System (INIS)

    Monteith, H.C.

    1978-10-01

    The High-Fluence Fast Pulsed Reactor (HFFPR) is a research reactor concept currently being evaluated as a source for weapon effects experimentation and advanced reactor safety experiments. One of the designs under consideration is a gas-cooled design for testing large-scale weapon hardware or large bundles of full-length, fast reactor fuel pins. This report describes a conceptual cooling system design for such a reactor. The primary coolant would be helium and the secondary coolant would be water. The size of the helium-to-water heat exchanger and the water-to-water heat exchanger will be on the order of 0.9 metre (3 feet) in diameter and 3 metres (10 feet) in length. Analysis indicates that the entire cooling system will easily fit into the existing Sandia Engineering Reactor Facility (SERF) building. The alloy Incoloy 800H appears to be the best candidate for the tube material in the helium-to-water heat exchanger. Type 316 stainless steel has been recommended for the shell of this heat exchanger. Estimates place the cost of the helium-to-water heat exchanger at approximately $100,000, the water-to-water heat exchanger at approximately $25,000, and the helium pump at approximately $450,000. The overall cost of the cooling system will approach $2 million

  7. Constructing large scale SCI-based processing systems by switch elements

    International Nuclear Information System (INIS)

    Wu, B.; Kristiansen, E.; Skaali, B.; Bogaerts, A.; Divia, R.; Mueller, H.

    1993-05-01

    The goal of this paper is to study some of the design criteria for the switch elements to form the interconnection of large scale SCI-based processing systems. The approved IEEE standard 1596 makes it possible to couple up to 64K nodes together. In order to connect thousands of nodes to construct large scale SCI-based processing systems, one has to interconnect these nodes by switch elements to form different topologies. A summary of the requirements and key points of interconnection networks and switches is presented. Two models of the SCI switch elements are proposed. The authors investigate several examples of systems constructed for 4-switches with simulations and the results are analyzed. Some issues and enhancements are discussed to provide the ideas behind the switch design that can improve performance and reduce latency. 29 refs., 11 figs., 3 tabs

  8. Development of performance assessment methodology for nuclear waste isolation in geologic media

    Science.gov (United States)

    Bonano, E. J.; Chu, M. S. Y.; Cranwell, R. M.; Davis, P. A.

    The burial of nuclear wastes in deep geologic formations as a means for their disposal is an issue of significant technical and social impact. The analysis of the processes involved can be performed only with reliable mathematical models and computer codes as opposed to conducting experiments because the time scales associated are on the order of tens of thousands of years. These analyses are concerned primarily with the migration of radioactive contaminants from the repository to the environment accessible to humans. Modeling of this phenomenon depends on a large number of other phenomena taking place in the geologic porous and/or fractured medium. These are ground-water flow, physicochemical interactions of the contaminants with the rock, heat transfer, and mass transport. Once the radionuclides have reached the accessible environment, the pathways to humans and health effects are estimated. A performance assessment methodology for a potential high-level waste repository emplaced in a basalt formation has been developed for the U.S. Nuclear Regulatory Commission.

  9. PetIGA: A framework for high-performance isogeometric analysis

    KAUST Repository

    Dalcin, Lisandro; Collier, N.; Vignal, Philippe; Cortes, Adriano Mauricio; Calo, Victor M.

    2016-01-01

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. We show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.

  10. PetIGA: A framework for high-performance isogeometric analysis

    KAUST Repository

    Dalcin, L.

    2016-05-25

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. We show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.

  11. Test-based age-of-acquisition norms for 44 thousand English word meanings.

    Science.gov (United States)

    Brysbaert, Marc; Biemiller, Andrew

    2017-08-01

    Age of acquisition (AoA) is an important variable in word recognition research. Up to now, nearly all psychology researchers examining the AoA effect have used ratings obtained from adult participants. An alternative basis for determining AoA is directly testing children's knowledge of word meanings at various ages. In educational research, scholars and teachers have tried to establish the grade at which particular words should be taught by examining the ages at which children know various word meanings. Such a list is available from Dale and O'Rourke's (1981) Living Word Vocabulary for nearly 44 thousand meanings coming from over 31 thousand unique word forms and multiword expressions. The present article relates these test-based AoA estimates to lexical decision times as well as to AoA adult ratings, and reports strong correlations between all of the measures. Therefore, test-based estimates of AoA can be used as an alternative measure.

  12. Performance correlations for high temperature potassium heat pipes

    International Nuclear Information System (INIS)

    Merrigan, M.A.; Keddy, E.S.; Sena, J.T.

    1987-01-01

    Potassium heat pipes designed for operation at a nominal temperature of 775 K have been developed for use in a heat pipe cooled reactor design. The heat pipes operate in a gravity-assist mode with a maximum required power throughput of approximately 16 kW per heat pipe. Based on a series of sub-scale experiments with 2.12 and 3.2 cm diameter heat pipes, the prototypic heat pipe diameter was set at 5.7 cm with a simple knurled wall wick used in the interests of mechanical simplicity. The performance levels required for this design had been demonstrated in prior work with gutter-assisted wicks, and emphasis in the present work was on the attainment of similar performance with a simplified wick structure. The wick structure used in the experiment consisted of a pattern of knurled grooves in the internal wall of the heat pipe. The knurl depth required for the planned heat pipe performance was determined by scaling of wick characteristic data from the sub-scale tests. These tests indicated that the maximum performance limits of the test heat pipes did not follow normal entrainment limit predictions for textured-wall gravity-assist heat pipes. Test data were therefore scaled to the prototype design based on the assumption that the performance was controlled by an entrainment parameter based on the liquid flow depth in the groove structure. This correlation provided a reasonable fit to the sub-scale test data and was used in scale-up of the design from the 8.0 cm² cross section of the largest sub-scale heat pipe to the 25.5 cm² cross section prototype. Correlation of the model predictions with test data from the prototype is discussed

  13. High-Temperature Structural Analysis of a Small-Scale PHE Prototype under the Test Condition of a Small-Scale Gas Loop

    International Nuclear Information System (INIS)

    Song, K.; Hong, S.; Park, H.

    2012-01-01

    A process heat exchanger (PHE) is a key component for transferring the high-temperature heat generated from a very high-temperature reactor (VHTR) to a chemical reaction for the massive production of hydrogen. The Korea Atomic Energy Research Institute has designed and assembled a small-scale nitrogen gas loop for a performance test on VHTR components and has manufactured a small-scale PHE prototype made of Hastelloy-X alloy. A performance test on the PHE prototype is underway in the gas loop, where different kinds of pipelines connecting to the PHE prototype are tested for reducing the thermal stress under the expansion of the PHE prototype. In this study, to evaluate the high-temperature structural integrity of the PHE prototype under the test condition of the gas loop, a realistic and effective boundary condition imposing the stiffness of the pipelines connected to the PHE prototype was suggested. An equivalent spring stiffness to reduce the thermal stress under the expansion of the PHE prototype was computed from the bending deformation and expansion of the pipelines connected to the PHE. A structural analysis on the PHE prototype was also carried out by imposing the suggested boundary condition. As a result of the analysis, the structural integrity of the PHE prototype seems to be maintained under the test condition of the gas loop.
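
    The equivalent-spring idea can be made concrete with standard beam formulas: a connected pipe run restrains the exchanger axially with stiffness k = EA/L and laterally, treated as a cantilever, with k = 3EI/L^3. The sketch below uses hypothetical pipe dimensions, not values from the gas-loop analysis.

        import math

        # Equivalent spring stiffness of a connected pipe, reduced to axial and
        # bending springs via standard beam formulas (a hedged illustration of
        # the kind of boundary condition described, not the authors' values).
        E = 2.0e11           # Young's modulus of the pipe, Pa (hypothetical)
        L = 3.0              # pipe run length, m (hypothetical)
        d_o, d_i = 0.114, 0.102               # outer/inner diameter, m (hypothetical)
        A = math.pi / 4 * (d_o**2 - d_i**2)   # cross-section area
        I = math.pi / 64 * (d_o**4 - d_i**4)  # second moment of area

        k_axial = E * A / L        # resists expansion along the pipe axis
        k_bend = 3 * E * I / L**3  # cantilever tip stiffness, resists lateral motion
        print(f"k_axial = {k_axial:.3e} N/m, k_bend = {k_bend:.3e} N/m")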

  14. Orbiting binary black hole evolutions with a multipatch high order finite-difference approach

    International Nuclear Information System (INIS)

    Pazos, Enrique; Tiglio, Manuel; Duez, Matthew D.; Kidder, Lawrence E.; Teukolsky, Saul A.

    2009-01-01

    We present numerical simulations of orbiting black holes for around 12 cycles, using a high order multipatch approach. Unlike some other approaches, the computational speed scales almost perfectly for thousands of processors. Multipatch methods are an alternative to adaptive mesh refinement, with benefits of simplicity and better scaling for improving the resolution in the wave zone. The results presented here pave the way for multipatch evolutions of black hole-neutron star and neutron star-neutron star binaries, where high resolution grids are needed to resolve details of the matter flow.

  15. TAU: a design for a thousand astronomical unit voyage

    International Nuclear Information System (INIS)

    Eubanks, D.; Alvis, J.; Bechler, E.; Lyon, W. III; McFarlane, D.; Palmrose, D.; Schmitz, P.

    1987-01-01

    The Jet Propulsion Lab. (JPL) has proposed a deep-space probe to travel to a distance of one thousand astronomical units, 25 times farther from the Sun than Pluto. In order to achieve this goal within the lifetime of the investigators, the mission time is set at a maximum of 50 yr. The JPL proposal postulates a design in which the probe is under powered thrust for the first 10 yr of the mission and coasts for the next 40 yr. A continuous high specific impulse, Isp (the ratio of thrust to propellant mass flow rate), low-thrust propulsion system (either magnetoplasmadynamic (MPD) or ion thrusters) is required in order to achieve this goal. This in turn necessitates electrical power in the megawatt range. The only power source that is practical for this situation is a nuclear reactor. It was at this point that the Nuclear Engineering Dept. at Texas A and M Univ. began its ongoing work, looking into several areas of the proposal in which a more detailed description was needed. These areas of interest were power, propulsion, heavy-lift launch capabilities, and trajectory analysis. In addition to all of the boundaries previously outlined, the technology level is assumed to be that of 1995, 8 yr from now

  16. Multi-megawatt wind-power installations call for new, high-performance solutions

    International Nuclear Information System (INIS)

    2004-01-01

    This article discusses the development of increasingly powerful and profitable wind-energy installations for off-shore, on-shore and refurbishment sites. In particular, the rapid development of megawatt-class units is discussed. The latest products of various companies with rotor diameters of up to 120 metres and with power ratings of up to 5 MW are looked at and commented on. The innovations needed for the reduction of weight and the extreme demands placed on gearing systems are discussed. Also, the growing markets for wind energy installations in Europe and the United States are discussed and plans for new off-shore wind parks are looked at

  17. A new scaling methodology for NO(x) emissions performance of gas burners and furnaces

    Science.gov (United States)

    Hsieh, Tse-Chih

    1997-11-01

    A general burner and furnace scaling methodology is presented, together with the resulting scaling model for NOx emissions performance of a broad class of swirl-stabilized industrial gas burners. The model is based on results from a set of novel burner scaling experiments on a generic gas burner and furnace design at five different scales having near-uniform geometric, aerodynamic, and thermal similarity and uniform measurement protocols. These provide the first NOx scaling data over the range of thermal scales from 30 kW to 12 MW, including input-output measurements as well as detailed in-flame measurements of NO, NOx, CO, O2, unburned hydrocarbons, temperature, and velocities at each scale. The in-flame measurements allow identification of key sources of NOx production. The underlying physics of these NOx sources lead to scaling laws for their respective contributions to the overall NOx emissions performance. It is found that the relative importance of each source depends on the burner scale and operating conditions. Simple furnace residence time scaling is shown to be largely irrelevant, with NOx emissions instead being largely controlled by scaling of the near-burner region. The scalings for these NOx sources are combined in a comprehensive scaling model for NOx emissions performance. Results from the scaling model show good agreement with experimental data at all burner scales and over the entire range of turndown, staging, preheat, and excess air dilution, with correlations generally exceeding 90%. The scaling model permits design trade-off assessments for a broad class of burners and furnaces, and allows the performance of full industrial-scale burners and furnaces of this type to be inferred from results of small-scale tests.

  18. High-Intensity Focused Ultrasound Treatment for Advanced Pancreatic Cancer

    Directory of Open Access Journals (Sweden)

    Yufeng Zhou

    2014-01-01

    Full Text Available Pancreatic cancer has high mortality but few effective treatment modalities. High-intensity focused ultrasound (HIFU) is becoming an emerging approach for noninvasively ablating solid tumors in the clinic. A variety of solid tumors have been treated in thousands of patients over the last fifteen years with great success. The principle, mechanism, and clinical outcome of HIFU are introduced first. All 3022 clinical cases of HIFU treatment for advanced pancreatic cancer, alone or in combination with chemotherapy or radiotherapy, in 241 published papers were reviewed and summarized for efficacy, pain relief, clinical benefit rate, survival, Karnofsky performance scale (KPS) score, changes in tumor size, occurrence of echogenicity, serum level, diagnostic assessment of outcome, and associated complications. Immune response induced by HIFU ablation may become an effective way of cancer treatment. Comments for a better outcome and current challenges of HIFU technology are also covered.

  19. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.

    Science.gov (United States)

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-11-07

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates that are caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm that is proposed in this paper uses the method of series reverse (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid the range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and focusing precision for high-resolution highly squint SAR data.

  20. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging

    Directory of Open Access Journals (Sweden)

    Tianzhu Yi

    2017-11-01

    Full Text Available This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates that are caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm that is proposed in this paper uses the method of series reverse (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid the range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and focusing precision for high-resolution highly squint SAR data.

  1. Impact of scaling on the performance and reliability degradation of metal-contacts in NEMS devices

    KAUST Repository

    Dadgour, Hamed F.

    2011-04-01

    Nano-electro-mechanical switches (NEMS) offer new possibilities for the design of ultra energy-efficient systems; however, thus far, all the fabricated NEMS devices require high supply voltages that limit their applicability for logic designs. Therefore, research is being conducted to lower the operating voltages by scaling down the physical dimensions of these devices. However, the impact of device scaling on the electrical and mechanical properties of metal contacts in NEMS devices has not been thoroughly investigated in the literature. Such a study is essential because metal contacts play a critical role in determining the overall performance and reliability of NEMS. Therefore, the comprehensive analytical study presented in this paper highlights the performance and reliability degradations of such metal contacts caused by scaling. The proposed modeling environment accurately takes into account the impact of roughness of contact surfaces, elastic/plastic deformation of contacting asperities, and various inter-molecular forces between mating surfaces (such as Van der Waals and capillary forces). The modeling results are validated and calibrated using available measurement data. This scaling analysis indicates that the key contact properties of gold contacts (resistance, stiction and wear-out) deteriorate "exponentially" with scaling. Simulation results demonstrate that reliable (stiction-free) operation of very small contact areas (≈6 nm × 6 nm) will be a daunting task due to the existence of strong surface forces. Hence, contact degradation is identified as a major problem to the scaling of NEMS transistors. © 2011 IEEE.

  2. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    Science.gov (United States)

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench-scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system, viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench-scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench-scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench-scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.

  3. CERN Computing Colloquia Spring Series: IT Security - A High-Performance Pattern Matching Engine for Intrusion Detection

    CERN Multimedia

    CERN. Geneva

    2006-01-01

    The flexible and modular design of the engine allows a broad spectrum of applications, ranging from high-end enterprise level network devices that need to match hundreds of thousands of patterns at speeds of tens of gigabits per second, to low-end dev...

  4. A Hybrid Testbed for Performance Evaluation of Large-Scale Datacenter Networks

    DEFF Research Database (Denmark)

    Pilimon, Artur; Ruepp, Sarah Renée

    2018-01-01

    Datacenters (DC) as well as their network interconnects are growing in scale and complexity. They are constantly being challenged in terms of energy and resource utilization efficiency, scalability, availability, reliability and performance requirements. Therefore, these resource-intensive environments must be properly tested and analyzed in order to make timely upgrades and transformations. However, a limited number of academic institutions and Research and Development companies have access to production scale DC Network (DCN) testing facilities, and resource-limited studies can produce misleading or inaccurate results. To address this problem, we introduce an alternative solution, which forms a solid base for a more realistic and comprehensive performance evaluation of different aspects of DCNs. It is based on the System-in-the-loop (SITL) concept, where real commercial DCN equipment …

  5. MetrIntSimil—An Accurate and Robust Metric for Comparison of Similarity in Intelligence of Any Number of Cooperative Multiagent Systems

    Directory of Open Access Journals (Sweden)

    Laszlo Barna Iantovics

    2018-02-01

    Full Text Available Intelligent cooperative multiagent systems are applied for solving a large range of real-life problems, including in domains like biology and healthcare. There are very few metrics able to make an effective measure of the machine intelligence quotient. The most important drawbacks of the metrics presented in the scientific literature consist in their limited universality, accuracy, and robustness. In this paper, we propose a novel universal metric called MetrIntSimil, capable of making an accurate and robust symmetric comparison of the similarity in intelligence of any number of cooperative multiagent systems specialized in difficult problem solving. Universality is an important necessary property, given the large variety of designed intelligent systems. MetrIntSimil makes a comparison by taking into consideration the variability in intelligence in the problem solving of the compared cooperative multiagent systems. It allows a classification of the cooperative multiagent systems based on their similarity in intelligence. A cooperative multiagent system has variability in its problem solving intelligence, and it can manifest lower or higher intelligence in different problem solving tasks. Multiple cooperative multiagent systems with similar intelligence can be included in the same class. For the evaluation of the proposed metric, we conducted a case study on several intelligent cooperative multiagent systems composed of simple computing agents applied to solving the Symmetric Travelling Salesman Problem (STSP), which belongs to a class of NP-hard problems. STSP is the problem of finding the shortest Hamiltonian cycle/tour in a weighted undirected graph that does not have loops or multiple edges; the distance between two cities is the same in each opposite direction. Two classes of similar intelligence, denoted IntClassA and IntClassB, were identified. The experimental results show that the agent belonging to the IntClassA intelligence class is less …
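
    For reference, the STSP defined above can be stated in a few lines of code, even though it is exactly solvable only for tiny instances; the hedged brute-force sketch below (with a made-up 4-city distance matrix) merely fixes the problem the compared multiagent systems solve heuristically.

        from itertools import permutations

        def stsp_brute_force(dist):
            """Exact shortest Hamiltonian tour for a symmetric distance matrix.
            Factorial cost, so usable only for tiny instances; it states the
            problem the compared multiagent systems solve heuristically."""
            n = len(dist)
            best_tour, best_len = None, float("inf")
            for perm in permutations(range(1, n)):  # fix city 0 as the start
                tour = (0,) + perm
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                if length < best_len:
                    best_tour, best_len = tour, length
            return best_tour, best_len

        dist = [[0, 2, 9, 10],
                [2, 0, 6, 4],
                [9, 6, 0, 3],
                [10, 4, 3, 0]]
        print(stsp_brute_force(dist))  # ((0, 1, 3, 2), 18)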

  6. Estuarine River Data for the Ten Thousand Islands Area, Florida, Water Year 2005

    Science.gov (United States)

    Byrne, Michael J.; Patino, Eduardo

    2008-01-01

    The U.S. Geological Survey collected stream discharge, stage, salinity, and water-temperature data near the mouths of 11 tributaries flowing into the Ten Thousand Islands area of Florida from October 2004 to June 2005. Maximum positive discharge from Barron River and Faka Union River was 6,000 and 3,200 ft³/s, respectively; no other tributary exceeded 2,600 ft³/s. Salinity variation was greatest at Barron River and Faka Union River, ranging from 2 to 37 ppt, and from 3 to 34 ppt, respectively. Salinity maximums were greatest at Wood River and Little Wood River, each exceeding 40 ppt. All data were collected prior to the commencement of the Picayune Strand Restoration Project, which is designed to establish a more natural flow regime to the tributaries of the Ten Thousand Islands area.

  7. Dynamic Performance Characteristic Tests of Real Scale Lead Rubber Bearing for the Evaluation of Performance Criteria

    International Nuclear Information System (INIS)

    Kim, Min Kyu; Kim, Jung-Han; Choi, In-Kil

    2014-01-01

    Dynamic characteristic tests of a full-scale lead rubber bearing were performed for the evaluation of performance criteria of isolation systems for nuclear power plants. For the dynamic tests, two 1500 mm diameter lead rubber bearings were manufactured. The objectives of the tests were to determine the ultimate shear strain level of the lead rubber bearing and to characterize its behavior under static versus dynamic input motions, sinusoidal versus random (earthquake) motions, and 1-dimensional versus 2-dimensional input motions. In this study, seismic isolation device tests were performed for the evaluation of performance criteria of the isolation system. The tests showed that when a mechanical property test is considered, dynamic and multi-degree loading conditions should be specified, and that these differences should be examined for how much they affect the global structural behavior

  8. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    International Nuclear Information System (INIS)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
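
    A hedged sketch of the central SpMM kernel follows, using SciPy's stock CSR format and a random symmetric matrix; the paper's speedups come from a compressed-sparse-blocks layout and cache-aware tuning that this minimal version does not reproduce.

        import numpy as np
        import scipy.sparse as sp

        # SpMM: multiply a sparse symmetric matrix by a block of m vectors at once.
        rng = np.random.default_rng(0)
        n, m = 10_000, 8
        A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
        A = (A + A.T) * 0.5          # symmetrize, as in the CI eigenproblem
        X = rng.normal(size=(n, m))  # block of m vectors

        Y = A @ X                    # SpMM: one pass over A updates all m outputs
        print(Y.shape)               # (10000, 8)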

  9. High Performance Marine Vessels

    CERN Document Server

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMVs craft and the differences between them and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface Covers the full range of high performance marine vessel concepts Explains the historical development of various HPMVs Discusses ferries, racing and pleasure craft, as well as utility and military missions High Performance Marine Vessels is an ideal book for student...

  10. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  11. Adaptation and learning: characteristic time scales of performance dynamics.

    Science.gov (United States)

    Newell, Karl M; Mayer-Kress, Gottfried; Hong, S Lee; Liu, Yeou-Teh

    2009-12-01

    A multiple time scales landscape model is presented that reveals structures of performance dynamics that were not resolved in the traditional power law analysis of motor learning. It shows the co-existence of separate processes during and between practice sessions that evolve in two independent dimensions characterized by time scales that differ by about an order of magnitude. Performance along the slow persistent dimension of learning often improves as much, and sometimes more, during rest (memory consolidation and/or insight generation processes) than during a practice session itself. In contrast, the process characterized by the fast, transient dimension of adaptation reverses direction between practice sessions, thereby significantly degrading performance at the beginning of the next practice session (warm-up decrement). The theoretical model fits, qualitatively and quantitatively, the data from Snoddy's [Snoddy, G. S. (1926). Learning and stability. Journal of Applied Psychology, 10, 1-36] classic learning study of mirror tracing and other averaged and individual data sets, and provides a new account of the processes of change in adaptation and learning. © 2009 Elsevier B.V. All rights reserved.
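
    One way to make the two-time-scale picture concrete is an illustrative curve in which a slow persistent process and a fast transient process decay with time constants an order of magnitude apart; the sketch below uses made-up parameters and is not the paper's fitted landscape model.

        import numpy as np

        def performance(t, tau_slow=100.0, tau_fast=10.0, a=1.0, b=0.4, floor=0.2):
            """Error-like performance measure: the slow persistent process and
            the fast transient process both decay with practice trials t, with
            time constants an order of magnitude apart (illustrative only)."""
            return floor + a * np.exp(-t / tau_slow) + b * np.exp(-t / tau_fast)

        for t in (0, 5, 20, 60):
            print(f"trial {t:3d}: error = {performance(t):.3f}")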

  12. APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters

    International Nuclear Information System (INIS)

    Ammendola, R; Salamon, A; Salina, G; Biagioni, A; Prezza, O; Cicero, F Lo; Lonardo, A; Paolucci, P S; Rossetti, D; Tosoratto, L; Vicini, P; Simula, F

    2011-01-01

    We describe herein the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology plus hardware support for a RDMA programming model and experimental acceleration of GPU networking; this design allows us to build a low latency, high bandwidth PC cluster, the APEnet+ network, the new generation of our cost-effective, tens-of-thousands-scalable cluster network architecture. Some test results and characterization of data transmission of a complete testbench, based on a commercial development card mounting an Altera® FPGA, are provided.

  13. APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R; Salamon, A; Salina, G [INFN Tor Vergata, Roma (Italy); Biagioni, A; Prezza, O; Cicero, F Lo; Lonardo, A; Paolucci, P S; Rossetti, D; Tosoratto, L; Vicini, P [INFN Roma, Roma (Italy); Simula, F [Sapienza Universita di Roma, Roma (Italy)

    2011-12-23

    We describe herein the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology plus hardware support for a RDMA programming model and experimental acceleration of GPU networking; this design allows us to build a low latency, high bandwidth PC cluster, the APEnet+ network, the new generation of our cost-effective, tens-of-thousands-scalable cluster network architecture. Some test results and characterization of data transmission of a complete testbench, based on a commercial development card mounting an Altera® FPGA, are provided.

  14. Utilizing High-Performance Computing to Investigate Parameter Sensitivity of an Inversion Model for Vadose Zone Flow and Transport

    Science.gov (United States)

    Fang, Z.; Ward, A. L.; Fang, Y.; Yabusaki, S.

    2011-12-01

    High-resolution geologic models have proven effective in improving the accuracy of subsurface flow and transport predictions. However, many of the parameters in subsurface flow and transport models cannot be determined directly at the scale of interest and must be estimated through inverse modeling. A major challenge, particularly in vadose zone flow and transport, is the inversion of the highly-nonlinear, high-dimensional problem as current methods are not readily scalable for large-scale, multi-process models. In this paper we describe the implementation of a fully automated approach for addressing complex parameter optimization and sensitivity issues on massively parallel multi- and many-core systems. The approach is based on the integration of PNNL's extreme scale Subsurface Transport Over Multiple Phases (eSTOMP) simulator, which uses the Global Array toolkit, with the Beowulf-Cluster inspired parallel nonlinear parameter estimation software, BeoPEST in the MPI mode. In the eSTOMP/BeoPEST implementation, a pre-processor generates all of the PEST input files based on the eSTOMP input file. Simulation results for comparison with observations are extracted automatically at each time step eliminating the need for post-process data extractions. The inversion framework was tested with three different experimental data sets: one-dimensional water flow at Hanford Grass Site; irrigation and infiltration experiment at the Andelfingen Site; and a three-dimensional injection experiment at Hanford's Sisson and Lu Site. Good agreements are achieved in all three applications between observations and simulations in both parameter estimates and water dynamics reproduction. Results show that eSTOMP/BeoPEST approach is highly scalable and can be run efficiently with hundreds or thousands of processors. BeoPEST is fault tolerant and new nodes can be dynamically added and removed. A major advantage of this approach is the ability to use high-resolution geologic models to preserve

  15. Del funcionalismo industrial al de servicios: ¿la nueva utopía de la metrópoli postindustrial del valle de México?

    Directory of Open Access Journals (Sweden)

    Blanca Ramírez

    2006-05-01

    Full Text Available The aim of this essay is to answer some questions that have been part of old and new reflections on the Metropolis of the Valley of Mexico, and to identify the new trends we perceive in its development. We start from the assumption that there is confusion in the way the metropolis is named, and that the medium- and long-term vision of its transformation involves a major shift from the industrializing function that the import-substitution model imposed on the city to a service-oriented and heritage-based one imposed by the postindustrial vision in which it is currently immersed. In this transformation the periphery stands out, given its privileged position with respect to the natural and cultural heritage that is its own, enabling its contribution to achieving the sustainability of the metropolis.

  16. Array-scale performance of TES X-ray Calorimeters Suitable for Constellation-X

    Science.gov (United States)

    Kilbourne, C. A.; Bandler, S. R.; Brown, A. D.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Iyomoto, N.; Kelley, R. L.; Porter, F. S.; Smith, S. J.

    2008-01-01

    Having developed a transition-edge-sensor (TES) calorimeter design that enables high spectral resolution in high fill-factor arrays, we now present array-scale results from 32-pixel arrays of identical closely packed TES pixels. Each pixel in such an array contains a Mo/Au bilayer with a transition temperature of 0.1 K and an electroplated Au or Au/Bi x-ray absorber. The pixels in an array have highly uniform physical characteristics and performance. The arrays are easy to operate due to the range of bias voltages and heatsink temperatures over which resolution better than 3 eV at 6 keV can be obtained. Resolution better than 3 eV has also been obtained with 2x8 time-division SQUID multiplexing. We will present the detector characteristics and show spectra acquired through the read-out chain from the multiplexer electronics through the demultiplexer software to real-time signal processing. We are working towards demonstrating this performance over the range of count rates expected in the observing program of the Constellation-X observatory. We will discuss the impact of increased counting rate on spectral resolution, including the effects of crosstalk and optimal-filtering dead time.

  17. Comparison of the large-scale radon risk map for southern Belgium with results of high resolution surveys

    International Nuclear Information System (INIS)

    Zhu, H.-C.; Charlet, J.M.; Poffijn, A.

    2000-01-01

    A large-scale radon survey consisting of long-term measurements in about 5200 single-family houses in the southern part of Belgium was carried out from 1995 to 1999. A radon risk map for the region was produced using geostatistical and GIS approaches. Some communes or villages situated within high-risk areas were chosen for detailed surveys. A high-resolution radon survey with about 330 measurements was performed in half of the commune of Burg-Reuland. Comparison of radon maps on quite different scales shows that the general Rn risk map has a similar pattern to the radon map for the detailed study area. Another detailed radon survey, in the village of Hatrival, situated in a high-radon area, found a very high proportion of houses with elevated radon concentrations. The results of this detailed survey are comparable to the expectation for high-risk areas on the large-scale radon risk map. The good correspondence between the findings of the general risk map and the analysis of the limited detailed surveys suggests that the large-scale radon risk map is likely reliable. (author)

  18. Cluman: Advanced cluster management for the large-scale infrastructures

    International Nuclear Information System (INIS)

    Babik, Marian; Fedorko, Ivan; Rodrigues, David

    2011-01-01

    The recent uptake of multi-core computing has produced rapid growth in virtualisation and cloud computing services. With the increased use of many-core processors this trend will likely accelerate, and computing centres will be faced with the management of tens of thousands of virtual machines. Furthermore, these machines will likely be geographically distributed and need to be allocated on demand. In order to cope with such complexity, we have designed and developed an advanced cluster management system that can execute administrative tasks targeting thousands of machines as well as provide an interactive high-density visualisation of the fabrics. The job management subsystem can perform complex tasks while following their progress and output, reporting aggregated information back to the system administrators. The visualisation subsystem can display tree maps of the infrastructure elements with data and monitoring information, thus providing a very detailed overview of the large clusters at a glance. The initial experience with development and testing of the system will be presented, as well as an evaluation of its performance.
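
    The job-management idea, running one administrative task across thousands of machines while following per-host outcomes and reporting an aggregate, can be sketched in plain Python. This is not Cluman's interface: the host names and the ssh invocation are placeholders.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor, as_completed

        HOSTS = [f"node{i:04d}" for i in range(1000)]  # placeholder names

        def run_on_host(host, command):
            # Run one command on one host via ssh and capture the outcome.
            try:
                proc = subprocess.run(["ssh", host, command],
                                      capture_output=True, text=True,
                                      timeout=60)
                return host, proc.returncode, proc.stdout.strip()
            except subprocess.TimeoutExpired:
                return host, -1, "timeout"

        def fan_out(command, max_workers=200):
            results = {}
            with ThreadPoolExecutor(max_workers=max_workers) as pool:
                futures = [pool.submit(run_on_host, h, command)
                           for h in HOSTS]
                for fut in as_completed(futures):
                    host, rc, out = fut.result()
                    results[host] = (rc, out)
            # Aggregate, as a cluster manager would report to its admins.
            failed = [h for h, (rc, _) in results.items() if rc != 0]
            print(f"{len(results) - len(failed)} ok, {len(failed)} failed")
            return results

        if __name__ == "__main__":
            fan_out("uptime")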

  19. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single-node performance by enabling vectorization and performing memory-layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, nearly half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
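
    A typical memory-layout optimization for PIC codes is moving particle data from an array-of-structures to a structure-of-arrays, so the hot loops become unit-stride and amenable to SIMD vectorization. A schematic NumPy illustration of the idea; these are not XGC1's actual data structures:

        import numpy as np

        n = 1_000_000
        dt = 1e-3
        rng = np.random.default_rng(0)

        # Array-of-structures: fields interleaved per particle, so every
        # field access strides across the whole record.
        aos = np.zeros(n, dtype=[("x", "f8"), ("v", "f8"), ("w", "f8")])
        aos["v"] = rng.standard_normal(n)
        aos["x"] += aos["v"] * dt      # strided loads and stores

        # Structure-of-arrays: each field is one contiguous array, so the
        # same particle push is a unit-stride loop that maps onto SIMD.
        x = np.zeros(n)
        v = rng.standard_normal(n)
        x += v * dt                    # unit-stride, vectorizable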

  20. Fiftieth Anniversary at the summit: neither fear of heights nor the cold succeeded in cooling the ardour of four brave climbers from CERN who celebrated CERN's 50th Anniversary at the summit of Mount Kilimanjaro (5,895 metres).

    CERN Multimedia

    2005-01-01

    On the way back from the summit, Miguel Cerqueira Bastos (AB/PO), David Collados Polidura (IT/GM), Sandra Sequeira Tavares (PH/CMI) and Daniel Cano Ott (n_TOF) raised the official CERN Jubilee flag at 4750 metres altitude.

  1. Measured Boundary Layer Transition and Rotor Hover Performance at Model Scale

    Science.gov (United States)

    Overmeyer, Austin D.; Martin, Preston B.

    2017-01-01

    An experiment involving a Mach-scaled, 11.08 ft diameter rotor was performed in hover during the summer of 2016 at NASA Langley Research Center. The experiment investigated the hover performance as a function of the laminar-to-turbulent transition state of the boundary layer, including both natural and fixed transition cases. The boundary layer transition locations were measured on both the upper and lower aerodynamic surfaces simultaneously. The measurements were enabled by recent advances in infrared sensor sensitivity and stability. The infrared thermography measurement technique was enhanced by a paintable blade surface heater, as well as a new high-sensitivity long-wave infrared camera. The measured transition locations showed extensive amounts, x/c > 0.90, of laminar flow on the lower surface at moderate to high thrust (C_T/σ > 0.068) for the full blade radius. The upper surface showed large amounts, x/c > 0.50, of laminar flow at the blade tip for low thrust (C_T/σ …) … boundary layer transition models in CFD and rotor design tools. The data is expected to be used as part of the AIAA Rotorcraft Simulation Working Group.

  2. Scaling gysela code beyond 32K-cores on bluegene/Q***

    Directory of Open Access Journals (Sweden)

    Bigot J.

    2013-12-01

    Full Text Available Gyrokinetic simulations lead to huge computational needs. Up to now, the semi-Lagrangian code Gysela has performed large simulations using a few thousand cores (typically 8k cores). Simulations with finer resolution and with kinetic electrons are expected to increase those needs by a huge factor, providing a good example of applications requiring Exascale machines. This paper presents our work to improve Gysela in order to target an architecture that presents one possible way towards Exascale: the Blue Gene/Q. After analyzing the limitations of the code on this architecture, we implemented three kinds of improvement: computational performance improvements, memory consumption improvements and disk I/O improvements. As a result, we show that the code now scales beyond 32k cores with much improved performance. This will make it possible to target the most powerful machines available and thus handle much larger physical cases.

  3. Development of an Attitude Scale towards High School Physics Lessons

    Science.gov (United States)

    Yavas, Pervin Ünlü; Çagan, Sultan

    2017-01-01

    The aim of this study was to develop a Likert type attitude scale for high school students with regard to high school physics lessons. The research was carried out with high school students who were studying in Ankara. First, the opinions of 105 high school students about physics lessons were obtained and then 55 scale items were determined from…

  4. Behind the scenes of HALO, a large-scale art installation conceived at CERN and inspired by ATLAS data will be exhibited during Art Basel

    CERN Multimedia

    marcelloni, claudia

    2018-01-01

    A large-scale immersive art installation entitled HALO is the artistic interpretation of the Large Hadron Collider’s ATLAS experiment and celebrates the links between art, science and technology. Inspired by raw data generated by ATLAS, the artwork has been conceived and executed by CERN’s former artists-in-residence, the “Semiconductor” duo Ruth Jarman and Joe Gerhardt, in collaboration with Mónica Bello, curator and head of Arts at CERN. The artwork is part of the 4th Audemars Piguet Art Commission. HALO is a cylindrical structure, measuring ten metres in diameter and surrounded by 4-metre-long vertical piano wires. On the inside, an enormous 360-degree screen creates an immersive visual experience. Using kaleidoscopic images of slowed-down particle collisions, which trigger piano wires to create sound, the experience takes the visitors into the realm of subatomic matter through the multiple patterns generated in the space. HALO is conceived as an experiential reworking of the ATLAS experiment. Its...

  5. Scaling HEP to Web size with RESTful protocols: The frontier example

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2011-01-01

    The World-Wide-Web has scaled to an enormous size. The largest single contributor to its scalability is the HTTP protocol, particularly when used in conformity to REST (REpresentational State Transfer) principles. High Energy Physics (HEP) computing also has to scale to an enormous size, so it makes sense to base much of it on RESTful protocols. Frontier, which reads databases with an HTTP-based RESTful protocol, has successfully scaled to deliver production detector conditions data from both the CMS and ATLAS LHC detectors to hundreds of thousands of computer cores worldwide. Frontier is also able to re-use a large amount of standard software that runs the Web: on the clients, caches, and servers. I discuss the specific ways in which HTTP and REST enable high scalability for Frontier. I also briefly discuss another protocol used in HEP computing that is HTTP-based and RESTful, and another protocol that could benefit from it. My goal is to encourage HEP protocol designers to consider HTTP and REST whenever the same information is needed in many places.
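
    The HTTP machinery that makes such a protocol cache-friendly is entirely standard: GET requests against stable URLs plus validator headers, so off-the-shelf proxy caches can answer repeated requests. A sketch using the widely used requests library; the URL and payload are hypothetical:

        import requests

        URL = "http://conditions.example.org/payload?run=12345"  # hypothetical

        # First fetch: the server may return validators such as ETag and
        # Cache-Control headers, which intermediate caches honour.
        resp = requests.get(URL)
        etag = resp.headers.get("ETag")

        # Repeat fetch: a conditional GET lets any cache along the way (or
        # the origin) answer 304 Not Modified instead of resending the body.
        headers = {"If-None-Match": etag} if etag else {}
        resp2 = requests.get(URL, headers=headers)
        if resp2.status_code == 304:
            print("cached copy still valid; reuse previous payload")
        else:
            print(f"new payload: {len(resp2.content)} bytes")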

  6. Development and Validation of a Rating Scale for Wind Jazz Improvisation Performance

    Science.gov (United States)

    Smith, Derek T.

    2009-01-01

    The purpose of this study was to construct and validate a rating scale for collegiate wind jazz improvisation performance. The 14-item Wind Jazz Improvisation Evaluation Scale (WJIES) was constructed and refined through a facet-rational approach to scale development. Five wind jazz students and one professional jazz educator were asked to record…

  7. A short history of wind power - from its early beginnings to today's installations and its business environment

    International Nuclear Information System (INIS)

    2005-01-01

    This article takes a look at how wind power has developed, from its beginnings centuries ago with windmills, through early installations in Denmark around 1900, to modern wind parks providing many thousands of megawatts of power generated by 100-metre-high units with installed power ratings of up to 5 megawatts. The history of wind power is traced from the simple windmill to the modern, industrially manufactured mass product. The expected growth of the wind-power market in the twenty-first century is discussed, as are the legal regulations governing the construction and use of wind turbines. Figures are also given on production capacities and installed power in various countries.

  8. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive-scale spatial data is due to the proliferation of cost-effective and ubiquitous positioning technologies, the development of high-resolution imaging technologies, and contributions from a large number of community users. There are two major challenges for managing and querying massive spatial data: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS, a scalable and high-performance spatial data warehousing system for running large-scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS in query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a library for processing spatial queries, and as an integrated software package in Hive.
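
    The partition-then-join pattern at the heart of such a system can be sketched without Hadoop at all: map each object to the grid tiles it touches, join locally within each tile, and deduplicate the pairs that boundary-straddling objects would otherwise produce more than once. A toy point-in-rectangle join; grid size and data are arbitrary:

        from collections import defaultdict

        points = [(2.5, 3.1), (7.2, 8.8), (5.01, 5.0)]
        rects = [(2, 3, 3, 4), (4.9, 4.9, 7.5, 9.0)]  # (xmin, ymin, xmax, ymax)
        TILE = 5.0

        def tiles_for_rect(xmin, ymin, xmax, ymax):
            # A rectangle may overlap several tiles: the "boundary objects".
            for tx in range(int(xmin // TILE), int(xmax // TILE) + 1):
                for ty in range(int(ymin // TILE), int(ymax // TILE) + 1):
                    yield (tx, ty)

        # "Map": replicate each rectangle into every tile it touches;
        # each point falls into exactly one tile.
        rects_by_tile = defaultdict(list)
        for r in rects:
            for tile in tiles_for_rect(*r):
                rects_by_tile[tile].append(r)

        # "Reduce": a purely local join per tile; the set dedupes pairs
        # that replicated objects could report from more than one tile.
        matches = set()
        for px, py in points:
            tile = (int(px // TILE), int(py // TILE))
            for (xmin, ymin, xmax, ymax) in rects_by_tile[tile]:
                if xmin <= px <= xmax and ymin <= py <= ymax:
                    matches.add(((px, py), (xmin, ymin, xmax, ymax)))
        print(matches)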

  9. Development of combinatorial chemistry methods for coatings: high-throughput adhesion evaluation and scale-up of combinatorial leads.

    Science.gov (United States)

    Potyrailo, Radislav A; Chisholm, Bret J; Morris, William G; Cawse, James N; Flanagan, William P; Hassib, Lamyaa; Molaison, Chris A; Ezbiansky, Karin; Medford, George; Reitz, Hariklia

    2003-01-01

    Coupling of combinatorial chemistry methods with high-throughput (HT) performance testing and measurement of the resulting properties has provided a powerful set of tools for the 10-fold accelerated discovery of new high-performance coating materials for automotive applications. Our approach replaces labor-intensive steps with automated systems for evaluation of adhesion of 8 x 6 arrays of coating elements that are discretely deposited on a single 9 x 12 cm plastic substrate. Performance of coatings is evaluated with respect to their resistance to adhesion loss, because this parameter is one of the primary considerations in end-use automotive applications. Our HT adhesion evaluation provides previously unavailable capabilities: high speed and reproducibility of testing through robotic automation, an expanded range of types of tested coatings through a coating tagging strategy, and improved quantitation through high signal-to-noise automatic imaging. Upon testing, the coatings undergo changes that are impossible to predict quantitatively using existing knowledge. Using our HT methodology, we have developed several coating leads. The HT screening results for the best coating compositions have been validated on the traditional scales of coating formulation and adhesion-loss testing. These validation results have confirmed the superb performance of the combinatorially developed coatings over conventional coatings on the traditional scale.

  10. Long-term superelastic cycling at nano-scale in Cu-Al-Ni shape memory alloy micropillars

    Energy Technology Data Exchange (ETDEWEB)

    San Juan, J., E-mail: jose.sanjuan@ehu.es; Gómez-Cortés, J. F. [Dpto. Física Materia Condensada, Facultad de Ciencia y Tecnología, Univ. del País Vasco UPV/EHU, Apdo. 644, 48080 Bilbao (Spain); López, G. A.; Nó, M. L. [Dpto. Física Aplicada II, Facultad de Ciencia y Tecnología, Univ. del País Vasco UPV/EHU, Apdo. 644, 48080 Bilbao (Spain); Jiao, C. [FEI, Achtseweg Noord 5, 5651 GG Eindhoven (Netherlands)

    2014-01-06

    Superelastic behavior at the nano-scale has been studied along cycling in Cu-Al-Ni shape memory alloy micropillars. Arrays of square micropillars were produced by focused ion beam milling on slides of [001]-oriented Cu-Al-Ni single crystals. The superelastic behavior of the micropillars, due to the stress-induced martensitic transformation, has been studied by nano-compression tests over a thousand cycles, and its evolution has been followed along cycling. Each pillar has undergone more than a thousand cycles without any detrimental evolution. Moreover, we demonstrate that after a thousand cycles they exhibit a perfectly reproducible and completely recoverable superelastic behavior.

  11. System for high-voltage control of detectors with a large number of photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It was developed for the GAMC-type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer rotated by a microengine. Block diagrams of the computer control electronics are presented. The high-voltage control system has been used for five years in the IHEP and CERN accelerator experiments. The operating experience has shown that it is quite simple and convenient to operate. With about 6 thousand controlled channels in the two experiments, no potentiometer or microengine failures were observed.

  12. High-Performance Cryogenic Designs for OMEGA and the National Ignition Facility

    Science.gov (United States)

    Goncharov, V. N.; Collins, T. J. B.; Marozas, J. A.; Regan, S. P.; Betti, R.; Boehly, T. R.; Campbell, E. M.; Froula, D. H.; Igumenshchev, I. V.; McCrory, R. L.; Myatt, J. F.; Radha, P. B.; Sangster, T. C.; Shvydky, A.

    2016-10-01

    The main advantage of laser symmetric direct drive (SDD) is a significantly higher coupling of drive laser energy into hot-spot internal energy at stagnation compared to that of laser indirect drive. Because of coupling losses resulting from cross-beam energy transfer (CBET), however, reaching ignition conditions on the NIF with SDD requires designs with excessively large in-flight aspect ratios (~30). Results of cryogenic implosions performed on OMEGA show that such designs are unstable to short-scale nonuniformity growth during shell implosion. Several CBET-reduction strategies have been proposed in the past. This talk will discuss high-performing designs using several CBET-mitigation techniques, including drive laser beams smaller than the target size and wavelength detuning. Designs that are predicted to reach alpha-burning regimes as well as a gain of 10 to 40 at the NIF scale will be presented. Hydrodynamically scaled OMEGA designs with similar CBET-reduction techniques will also be discussed. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  13. Performance limiting effects in X-band accelerators

    Directory of Open Access Journals (Sweden)

    Faya Wang

    2011-01-01

    Full Text Available Acceleration gradient is a critical parameter for the design of future TeV-scale linear colliders. The major obstacle to higher gradients in room-temperature accelerators is rf breakdown, which is still a mysterious phenomenon that depends on the geometry and material of the accelerator as well as the input power and operating frequency. Pulsed heating has been associated with breakdown for many years; however, there have been no experiments that clearly separate field and heating effects on the breakdown rate. Recently, such experiments have been performed at SLAC with both standing-wave and traveling-wave structures. These experiments have demonstrated that pulsed heating limits the gradient. Nevertheless, breakdown studies of X-band structures show damage to the iris surfaces in locations of high electric field rather than of high magnetic field after thousands of breakdowns. It is not yet clear how the relative roles of electric field, magnetic field, and heating factor into the damage caused by rf breakdown. Thus, a dual-moded cavity has been designed to better study the effects of electric field, magnetic field, and pulsed heating on breakdown damage.

  14. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  15. The effect of rater training on scoring performance and scale-specific expertise amongst occupational therapists participating in a multicentre study

    DEFF Research Database (Denmark)

    Hansen, Tina; Elholm Madsen, Esben; Sørensen, Annette

    2016-01-01

    When using the McGill Ingestive Skills Assessment (MISA), raters observe, interpret and record the occupational performance of dysphagic clients participating in a meal. This is a highly complex task, which might introduce unwanted variability in measurement scores. A 2-day rater training programme was developed, and this builds … of the training on scoring performance and scale-specific expertise amongst raters. METHOD: During 2 days of rater training, 81 occupational therapists (OTs) were qualified to observe and score dysphagic clients' mealtime performance according to the criteria of the 36 MISA items. The training effects were evaluated … deficient mealtime performance appeared most difficult to score. The OTs' scale-specific expertise improved significantly (knowledge: Z = -7.857, p …) … performance when using the Danish MISA as well as their perceived…

  16. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  17. Robust AIC with High Breakdown Scale Estimate

    Directory of Open Access Journals (Sweden)

    Shokrya Saleh

    2014-01-01

    Full Text Available The Akaike Information Criterion (AIC) based on least squares (LS) regression minimizes the sum of the squared residuals; LS is sensitive to outlier observations. Alternative criteria, which are less sensitive to outlying observations, have been proposed; examples are robust AIC (RAIC), robust Mallows Cp (RCp), and robust Bayesian information criterion (RBIC). In this paper, we propose a robust AIC by replacing the scale estimate with a high-breakdown-point estimate of scale. The robustness of the proposed method is studied through its influence function. We show that the proposed robust AIC is effective in selecting accurate models in the presence of outliers and high-leverage points, through simulated and real data examples.
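
    For Gaussian least-squares regression with n observations, k fitted parameters and residuals r_i, the criterion being modified can be written out explicitly. One simple reading of the substitution, shown here in LaTeX with the normalized median absolute deviation as the high-breakdown scale (the paper's exact estimator may differ), is:

        \mathrm{AIC} = n \ln \hat{\sigma}^{2} + 2k, \qquad
        \hat{\sigma}^{2}_{\mathrm{LS}} = \frac{1}{n} \sum_{i=1}^{n} r_i^{2}, \qquad
        \hat{\sigma}_{\mathrm{MAD}} = 1.4826 \, \operatorname{med}_i \left| r_i - \operatorname{med}_j r_j \right|

    Substituting the MAD-based scale for the LS scale keeps the 2k complexity penalty but prevents a few large residuals from inflating the scale estimate and masking poorly fitting models.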

  18. Development and performance evaluation of frustum cone shaped churn for small scale production of butter.

    Science.gov (United States)

    Kalla, Adarsh M; Sahu, C; Agrawal, A K; Bisen, P; Chavhan, B B; Sinha, Geetesh

    2016-05-01

    The present research was intended to develop a small-scale butter churn and evaluate its performance by altering churning temperature and churn speed during butter making. In the present study, the cream was churned at different temperatures (8, 10 and 12 °C) and churn speeds (35, 60 and 85 rpm). The optimum values of churning time (40 min), moisture content (16 %) and overrun (19.42 %) were obtained when cream was churned at a churning temperature of 10 °C and a churn speed of 60 rpm. Using these conditions of churning temperature and churn speed, high-quality butter can be produced at cottage scale.

  19. High performance high-κ/metal gate complementary metal oxide semiconductor circuit element on flexible silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-02-29

    Thinned silicon-based complementary metal oxide semiconductor (CMOS) electronics can be physically flexible. To overcome the challenges of limited thinning and device damage originating from the back-grinding process, we show sequential reactive ion etching of silicon with assistance from soft polymeric materials to efficiently achieve thinned (40 μm) and flexible (1.5 cm bending radius) silicon-based functional CMOS inverters with high-κ/metal gate transistors. Notable advances in this study include large-area silicon thinning with pre-fabricated high-performance elements at ultra-large-scale-integration density (using 90 nm node technology) and the dicing of such large, thinned (seemingly fragile) pieces into smaller pieces using an excimer laser. Various mechanical bending tests and bending cycles show the undeterred high performance of the flexible silicon CMOS inverters. Future work will include transfer of the diced silicon chips to the destination site, interconnects, and packaging to obtain fully flexible electronic systems in a CMOS-compatible way.

  20. PERFORMANCE OF DIFFERENT CMOS LOGIC STYLES FOR LOW POWER AND HIGH SPEED

    OpenAIRE

    Sreenivasa Rao.Ijjada; Ayyanna.G; G.Sekhar Reddy; Dr.V.Malleswara Rao

    2011-01-01

    Designing high-speed low-power circuits with CMOS technology has been a major research problem for many years. Several logic families have been proposed and used to improve circuit performance beyond that of conventional static CMOS family. Fast circuit families are becoming attractive in deep sub micron technologies since the performance benefits obtained from process scaling are decreasing as feature size decreases. This paper presents CMOS differential circuit families such as Dual rail do...

  1. The scaling of performance and losses in miniature internal combustion engines

    Science.gov (United States)

    Menon, Shyam Kumar

    Miniature glow-ignition internal combustion (IC) piston engines are an off-the-shelf technology that could dramatically increase the endurance of miniature electric power supplies and the range and endurance of small unmanned air vehicles, provided their overall thermodynamic efficiencies can be increased to 15% or better. This thesis presents the first comprehensive analysis of small (…) engines. A system is developed that is capable of making reliable measurements of engine performance and losses in these small engines. Methodologies are also developed for measuring volumetric, heat transfer, exhaust, mechanical, and combustion losses. These instruments and techniques are used to investigate the performance of seven single-cylinder, two-stroke, glow-fueled engines ranging in size from 15 to 450 g (0.16 to 7.5 cm3 displacement). Scaling rules for power output, overall efficiency, and normalized power are developed from the data. These will be useful to developers of micro air vehicles and miniature power systems. The data show that the minimum length scale of a thermodynamically viable piston engine based on present technology is approximately 3 mm. Incomplete combustion is the most important challenge, as it accounts for 60-70% of total energy losses. Combustion losses are followed in order of importance by heat transfer, sensible enthalpy, and friction. A net heat release analysis based on in-cylinder pressure measurements suggests that a two-stage combustion process occurs at low engine speeds and equivalence ratios close to 1. Different theories based on burning mode and reaction kinetics are proposed to explain the observed results. High-speed imaging of the combustion chamber suggests that a turbulent premixed flame with its origin in the vicinity of the glow plug is the primary driver of combustion. Placing miniature IC engines on a turbulent combustion regime diagram shows that they operate in the 'flamelet in eddy' regime whereas conventional-scale engines operate…

  2. On the construction of a 2-metre mirror blank for the universal reflecting telescope in Tautenburg (German Title: Über die Fertigung eines 2-Meter-Spiegelträgers für das Universal-Spiegelteleskop in Tautenburg )

    Science.gov (United States)

    Lödel, Wolfgang

    The astronomers' desire to penetrate deeper into space translates into a demand for larger telescopes. The primary mirror constitutes the main part of a reflecting telescope, and it determines all subsequent activities. Already in the 1930s there were activities in the Schott company to manufacture mirror blanks up to diameters of 2 metres, which could not be pursued because of political constraints. This ambitious goal was picked up again a few years after the war. At a time when the procurement of raw materials was extremely difficult, the glass workers of Schott in Jena took on this large project. After some failures, a good mirror blank could be delivered to the Carl Zeiss company in 1951 for further processing and for the construction of the first 2-metre reflecting telescope. From 1960 to 1986, this mirror, made from the optical glass ZK7, served its purpose at the Karl Schwarzschild Observatory in Tautenburg. It was then replaced by a zero-expansion glass-ceramic mirror.

  3. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost that quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right-hand sides. Second, for this linear system we developed a novel, mixed-precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much-needed quadratic cost but in addition offers excellent opportunities for scaling in massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance of 73% of the theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
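
    The stochastic diagonal estimator at the heart of this approach is compact enough to sketch: probe the matrix with random ±1 vectors, solve one linear system per probe with an iterative solver (no factorization), and form an elementwise quotient. A NumPy/SciPy illustration on a small SPD stand-in for a covariance matrix; size and probe count are arbitrary:

        import numpy as np
        from scipy.sparse.linalg import aslinearoperator, cg

        rng = np.random.default_rng(0)
        n = 200
        B = rng.standard_normal((n, n))
        A = B @ B.T + n * np.eye(n)        # SPD stand-in for a covariance
        op = aslinearoperator(A)

        num = np.zeros(n)
        den = np.zeros(n)
        for _ in range(100):               # number of random probes
            v = rng.choice([-1.0, 1.0], size=n)   # Rademacher vector
            x, info = cg(op, v)            # x = A^{-1} v, iteratively
            num += v * x
            den += v * v
        est = num / den                    # estimate of diag(A^{-1})

        exact = np.diag(np.linalg.inv(A))
        print("max relative error:", np.max(np.abs(est - exact) / exact))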

  4. Towards LarKC : A platform for Web-scale reasoning

    NARCIS (Netherlands)

    Fensel, Dieter; Van Harmelen, Frank; Andersson, Bo; Brennan, Paul; Cunningham, Hamish; Valle, Emanuele Delia; Fischer, Florian; Huang, Zhisheng; Kiryakov, Atanas; Lee, Tony Kyung Il; Schooler, Lael; Tresp, Volker; Wesner, Stefan; Witbrock, Michael; Zhong, Ning

    2008-01-01

    Current Semantic Web reasoning systems do not scale to the requirements of their hottest applications, such as analyzing data from millions of mobile devices, dealing with terabytes of scientific data, and content management in enterprises with thousands of knowledge workers. In this paper, we

  5. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Teng [Auburn University, Auburn, Alabama]; Oral, H Sarp [ORNL]; Wang, Yandong [Auburn University, Auburn, Alabama]; Settlemyer, Bradley W [ORNL]; Atchley, Scott [ORNL]; Yu, Weikuan [Auburn University, Auburn, Alabama]

    2014-01-01

    The growth of computing power on large-scale systems requires a commensurately high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a novel system is imperative to temporarily buffer the bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5x on leadership computer systems.

  6. INFLUENCE OF SCALE RATIO, ASPECT RATIO, AND PLANFORM ON THE PERFORMANCE OF SUPERCAVITATING HYDROFOILS.

    Science.gov (United States)

    …performance of supercavitating hydrofoils. No appreciable scale effect was found for scale ratios up to 3 in the fully-cavitating flow region. The … overall performance of the hydrofoil by increasing the aspect ratio above 3, and (2) a moderate taper ratio seems to be advantageous in view of the overall performance of supercavitating hydrofoils. (Author)

  7. Reading Fluency as a Predictor of Reading Proficiency in Low-Performing, High-Poverty Schools

    Science.gov (United States)

    Baker, Scott K.; Smolkowski, Keith; Katz, Rachell; Fien, Hank; Seeley, John R.; Kame'enui, Edward J.; Beck, Carrie Thomas

    2008-01-01

    The purpose of this study was to examine oral reading fluency (ORF) in the context of a large-scale federal reading initiative conducted in low performing, high poverty schools. The objectives were to (a) investigate the relation between ORF and comprehensive reading tests, (b) examine whether slope of performance over time on ORF predicted…

  8. Development of an evaluation performance scale for social educators in child protection centers

    Directory of Open Access Journals (Sweden)

    Juan Manuel Fernández Millán

    2013-09-01

    Full Text Available Purpose: In a context of economic crisis, as in the case of Spain, evaluating the performance of employees in any field is a key tool for improving worker efficiency. For those professions that deliver basic social services, its importance is even greater. This study therefore focuses on developing a performance rating scale for social educators using the BARS technique. Design/methodology/approach: We asked 11 experts to list the competencies they believed necessary to perform this work efficiently. We then selected the competencies on which at least 3 judges agreed. Each competency was associated with two critical incidents, and the corresponding behavioural anchors were developed. In addition, the scale collects personal data such as age and time off work, often used as indicators of performance. Finally, the scale was tested on a sample of 128 social educators working in interim child care centres and juvenile correctional centres. Findings and Originality/value: The results show that the scale meets the criteria required for psychometric validation (α = 0.873). The scale could also be factored (Kaiser-Meyer-Olkin = 0.810), yielding three dimensions: teamwork, interpersonal skills, and job competencies. Research limitations/implications: A lack of covariation was observed between the external criteria used as indicators of good performance (age and number of sick leaves) and the employee's competence. This confirms the inadequacy of these criteria for predicting performance, and points to the need for performance evaluation tools that include absenteeism and experience as predictors. Practical implications: The inadequacy may be due to the usual confusion between work experience and seniority, and between sick leave and absenteeism. Originality/value: The study helps define the profile and the competencies of social…

  9. Exotic Fish in Exotic Plantations: A Multi-Scale Approach to Understand Amphibian Occurrence in the Mediterranean Region.

    Directory of Open Access Journals (Sweden)

    Joana Cruz

    Full Text Available Globally, amphibian populations are threatened by a diverse range of factors including habitat destruction and alteration. Forestry practices have been linked with low diversity and abundance of amphibians. The effect of exotic Eucalyptus spp. plantations on amphibian communities has been studied in a number of biodiversity hotspots, but little is known of their impact in the Mediterranean region. Here, we identify the environmental factors influencing the presence of six species of amphibians (the caudates Pleurodeles waltl, Salamandra salamandra, Lissotriton boscai and Triturus marmoratus, and the anurans Pelobates cultripes and Hyla arborea/meridionalis) occupying 88 ponds. The study was conducted in a Mediterranean landscape dominated by eucalypt plantations alternating with traditional uses (agricultural, montados and native forest) at three different scales: local (pond), intermediate (400-metre-radius buffer) and broad (1000-metre-radius buffer). Using the Akaike Information Criterion corrected for small samples (AICc), we selected the top-ranked models for estimating the probability of occurrence of each species at each spatial scale separately and across all three spatial scales, using a combination of covariates at the different scales. Models with a combination of covariates at the different spatial scales received stronger support than those at individual scales. The presence of predatory fish in a pond had a strong effect on caudate presence. Permanent ponds were selected by Hyla arborea/meridionalis over temporary ponds. Species occurrence was not increased by a higher density of streams, and the density of ponds had a negative impact on Lissotriton boscai. The proximity of ponds occupied by conspecifics had a positive effect on the occurrence of Lissotriton boscai and Pleurodeles waltl. Eucalypt plantations had a negative effect on the occurrence of the newt Lissotriton boscai and the anurans Hyla arborea/meridionalis, but had a positive effect on…
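
    The small-sample criterion used here is the standard bias-corrected AIC. In LaTeX form, for a model with k parameters fitted to n observations with maximized likelihood \hat{L}:

        \mathrm{AIC}_c = -2 \ln \hat{L} + 2k + \frac{2k(k+1)}{n - k - 1}

    Candidate models are then ranked by their AICc differences relative to the best model; the correction term vanishes as n/k grows, recovering ordinary AIC.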

  10. Natural tracer profiles across argillaceous formations: the Claytrac project

    International Nuclear Information System (INIS)

    Mazurek, M.; Alt-Epping, P.; Gimi, Th.; Niklaus Waber, H.; Bath, A.; Gimmi, Th.

    2009-01-01

    Disposal of high-level radioactive waste and spent nuclear fuel in engineered facilities, or repositories, located deep underground in suitable geological formations is being developed worldwide as the reference solution to protect humans and the environment both now and in the future. An important aspect of assessing the long-term safety of deep geological disposal is developing a comprehensive understanding of the geological environment in order to define the initial conditions for the disposal system as well as to provide a sound scientific basis for projecting its future evolution. The transport pathways and mechanisms by which contaminants could migrate in the surrounding host rock are key elements in any safety case. Relevant experiments in laboratories or underground test facilities can provide important information, but the challenge remains in being able to extrapolate the results to the spatial and temporal scales required for performance assessment, which are typically tens to hundreds of metres and from thousands to beyond a million years into the future. Profiles of natural tracers dissolved in pore water of argillaceous rock formations can be considered as large-scale and long-term natural experiments which enable the transport properties to be characterised. The CLAYTRAC Project on Natural Tracer Profiles Across Argillaceous Formations was established by the NEA Clay Club to evaluate the relevance of natural tracer data in understanding past geological evolution and in confirming dominant transport processes. Data were analysed for nine sites to support scientific understanding and development of geological disposal. The outcomes of the project show that, for the sites and clay-rich formations that were studied, there is strong evidence that solute transport is controlled mainly by diffusion. The results can improve site understanding and performance assessment in the context of deep geological disposal and have the potential to be applied to other
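
    The spatial and temporal scales quoted above are tied together by the characteristic diffusion length. With an illustrative pore-water diffusion coefficient (the value is assumed for the example, not taken from the report), in LaTeX form:

        L \approx \sqrt{2 D t}, \qquad
        D = 10^{-11}\ \mathrm{m^{2}\,s^{-1}}, \quad
        t = 1\ \mathrm{Myr} \approx 3.15 \times 10^{13}\ \mathrm{s}
        \;\Rightarrow\; L \approx \sqrt{630\ \mathrm{m^{2}}} \approx 25\ \mathrm{m}

    which is why tracer profiles tens of metres across can record diffusion acting over timescales on the order of a million years.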

  11. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  12. High-performance flat-panel solar thermoelectric generators with high thermal concentration

    Science.gov (United States)

    Kraemer, Daniel; Poudel, Bed; Feng, Hsien-Ping; Caylor, J. Christopher; Yu, Bo; Yan, Xiao; Ma, Yi; Wang, Xiaowei; Wang, Dezhi; Muto, Andrew; McEnaney, Kenneth; Chiesa, Matteo; Ren, Zhifeng; Chen, Gang

    2011-07-01

    The conversion of sunlight into electricity has been dominated by photovoltaic and solar thermal power generation. Photovoltaic cells are deployed widely, mostly as flat panels, whereas solar thermal electricity generation relying on optical concentrators and mechanical heat engines is only seen in large-scale power plants. Here we demonstrate a promising flat-panel solar thermal to electric power conversion technology based on the Seebeck effect and high thermal concentration, thus enabling wider applications. The developed solar thermoelectric generators (STEGs) achieved a peak efficiency of 4.6% under AM1.5G (1 kW m⁻²) conditions. The efficiency is 7-8 times higher than the previously reported best value for a flat-panel STEG, and is enabled by the use of high-performance nanostructured thermoelectric materials and spectrally-selective solar absorbers in an innovative design that exploits high thermal concentration in an evacuated environment. Our work opens up a promising new approach which has the potential to achieve cost-effective conversion of solar energy into electricity.

  14. High-performance parallel processors based on star-coupled wavelength division multiplexing optical interconnects

    Science.gov (United States)

    Deri, Robert J.; DeGroot, Anthony J.; Haigh, Ronald E.

    2002-01-01

    As the performance of individual elements within parallel processing systems increases, increased communication capability between distributed processor and memory elements is required. There is great interest in using fiber optics to improve interconnect communication beyond that attainable using electronic technology. Several groups have considered WDM, star-coupled optical interconnects. The invention uses a fiber optic transceiver to provide low-latency, high-bandwidth channels for such interconnects using a robust multimode fiber technology. Instruction-level simulation is used to quantify the bandwidth, latency, and concurrency required for such interconnects to scale to 256 nodes, each operating at 1 GFLOPS performance. Performance has been shown to scale to approximately 100 GFLOPS for scientific application kernels using a small number of wavelengths (8 to 32), only one wavelength received per node, and achievable optoelectronic bandwidth and latency.

  15. Flexible event reconstruction software chains with the ALICE High-Level Trigger

    International Nuclear Information System (INIS)

    Ram, D; Breitner, T; Szostak, A

    2012-01-01

    The ALICE High-Level Trigger (HLT) has a large high-performance computing cluster at CERN whose main objective is to perform real-time analysis on the data generated by the ALICE experiment and scale it down to at most 4 GB/s, the current maximum mass-storage bandwidth available. Data flow in this cluster is controlled by a custom-designed software framework. It consists of a set of components which can communicate with each other via a common control interface. The software framework also supports the creation of different configurations based on the detectors participating in the HLT. These configurations define a logical data-processing "chain" of detector data-analysis components. Data flows through this software chain in a pipelined fashion, so that several events can be processed at the same time. An instance of such a chain can run and manage a few thousand physics analysis and data-flow components. The HLT software and the configuration scheme used in the 2011 heavy-ion runs of ALICE are discussed in this contribution.
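
    A pipelined chain of this kind can be mimicked with queues and threads: each component consumes from its input queue and feeds the next, so several events are in flight at once. A schematic sketch; the two stage functions are invented placeholders, not HLT components:

        import queue
        import threading

        def stage(func, q_in, q_out):
            # Generic pipeline component: pull, process, push, repeat.
            while True:
                event = q_in.get()
                if event is None:          # poison pill: shut down, pass on
                    if q_out is not None:
                        q_out.put(None)
                    break
                result = func(event)
                if q_out is not None:
                    q_out.put(result)

        decode = lambda e: {"id": e, "raw": e * 2}       # placeholder stages
        track = lambda e: {**e, "tracks": e["raw"] % 7}

        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        threads = [
            threading.Thread(target=stage, args=(decode, q1, q2)),
            threading.Thread(target=stage, args=(track, q2, q3)),
        ]
        for t in threads:
            t.start()
        for event_id in range(5):   # events stream in; stages overlap
            q1.put(event_id)
        q1.put(None)
        for t in threads:
            t.join()
        while (out := q3.get()) is not None:
            print(out)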

  16. Improving corrosion resistance of post-tensioned substructures emphasizing high performance grouts

    Science.gov (United States)

    Schokker, Andrea Jeanne

    The use of post-tensioning in bridges can provide durability and structural benefits to the system while expediting the construction process. When post-tensioning is combined with precast elements, traffic interference can be greatly reduced through rapid construction. Post-tensioned concrete substructure elements such as bridge piers, hammerhead bents, and straddle bents have become more prevalent in recent years. Chloride induced corrosion of steel in concrete is one of the most costly forms of corrosion each year. Coastal substructure elements are exposed to seawater by immersion or spray, and inland bridges may also be at risk due to the application of deicing salts. Corrosion protection of the post-tensioning system is vital to the integrity of the structure because loss of post-tensioning can result in catastrophic failure. Documentation for durability design of the grout, ducts, and anchorage systems is very limited. The objective of this research is to evaluate the effectiveness of corrosion protection measures for post-tensioned concrete substructures by designing and testing specimens representative of typical substructure elements using state-of-the-art practices in aggressive chloride exposure environments. This was accomplished through exposure testing of twenty-seven large-scale beam specimens and ten large-scale column specimens. High performance grout for post-tensioning tendon injection was also developed through a series of fresh property tests, accelerated exposure tests, and a large-scale pumping test to simulate field conditions. A high performance fly ash grout was developed for applications with small vertical rises, and a high performance anti-bleed grout was developed for applications involving large vertical rises such as tall bridge piers. Long-term exposure testing of the beam and column specimens is ongoing, but preliminary findings indicate increased corrosion protection with increasing levels of post-tensioning, although traditional

  17. High performance germanium MOSFETs

    International Nuclear Information System (INIS)

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit

    2006-01-01

    Ge is a very promising candidate as a future channel material for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become mainstream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed, smooth, single-crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded SiGe buffer layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High-mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high-mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short-channel effects. We present novel Si- and Ge-based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full-band Monte Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (~2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface-channel strained SiGe devices.

  19. Development of Export Performance Scale for Fresh Vegetable-Fruit Sector

    Directory of Open Access Journals (Sweden)

    Murat Keskinkılınç

    2018-06-01

    Full Text Available The purpose of this paper is to propose a scale for assessing the performance of foreign trade companies in the fresh vegetable-fruit sector. As a first step, qualitative interviews were conducted with a sample of managers working in export companies. As a result of the interviews, the major problems of exporters were grouped. In the second phase of the study a questionnaire was formed and a survey was administered to a larger sample. Subsequently, the validity and reliability of the scales were determined by means of exploratory and confirmatory factor analyses and reliability analysis, respectively. The theoretical contribution of this research is the development of a method for evaluating the export performance of foreign trade companies in the fresh vegetable-fruit sector.

  20. Do Performance-Safety Tradeoffs Cause Hypometric Metabolic Scaling in Animals?

    Science.gov (United States)

    Harrison, Jon F

    2017-09-01

    Hypometric scaling of aerobic metabolism in animals has been widely attributed to constraints on oxygen (O2) supply in larger animals, but recent findings demonstrate that O2 supply balances with need regardless of size. Larger animals also do not exhibit evidence of compensation for O2 supply limitation. Because declining metabolic rates (MRs) are tightly linked to fitness, this provides significant evidence against the hypothesis that constraints on supply drive hypometric scaling. As an alternative, ATP demand might decline in larger animals because of performance-safety tradeoffs. Larger animals, which typically reproduce later, exhibit risk-reducing strategies that lower MR. Conversely, smaller animals are more strongly selected for growth and costly neurolocomotory performance, elevating metabolism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Strategic options towards an affordable high-performance infrared camera

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel uncooled InGaAs system with high sensitivity, low noise, high frame rates (500 frames per second (FPS)) at full resolution, and low power consumption. … market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. Among the strategic options presented are new sensor manufacturing technologies that scale favorably towards automation, readout electronics compatible with multiple focal plane arrays, and dense or ultra-small pixel pitch devices.

  2. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  3. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
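
    As a rough sketch of the general shape of such a model (not the paper's actual formulation), predicted runtime can be decomposed into per-core compute time, a memory-bandwidth contention term and an alpha-beta communication term; all numbers below are hypothetical:

```python
# Sketch of a runtime model of this general shape (not the paper's actual
# equations): per-core compute time, a memory-bandwidth contention term that
# grows as cores share a socket, and an alpha-beta communication term.
def predicted_runtime(compute_s, bytes_per_core, socket_bw, cores_per_socket,
                      msg_count, latency_s, msg_bytes, link_bw):
    mem_s = bytes_per_core / (socket_bw / cores_per_socket)   # contention
    comm_s = msg_count * (latency_s + msg_bytes / link_bw)    # alpha-beta model
    return compute_s + mem_s + comm_s

# Hypothetical numbers for illustration only.
t = predicted_runtime(compute_s=2.0, bytes_per_core=4e9, socket_bw=25.6e9,
                      cores_per_socket=4, msg_count=100, latency_s=5e-6,
                      msg_bytes=1e6, link_bw=1e9)
print(f"predicted runtime: {t:.2f} s")
```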

  4. Evaluation of anti-scale property of CrN coatings at high temperature and high pressure

    International Nuclear Information System (INIS)

    Honda, Tomomi; Iwai, Yoshiro; Uno, Ryoji; Yoshinaga, Shigeki

    2007-01-01

    It is well known that oxide scale adhering to the inner wall of nozzles in nuclear power plants causes serious problems. This study was carried out to obtain knowledge about the initiation and deposition behavior of oxide scale on the surface of SUS304 stainless steel and to evaluate the anti-scale property of chromium nitride (CrN) coatings at high temperature and high pressure. SUS304 stainless steel and CrN coating specimens were heated in water up to 200 °C for more than 250 hours. The results obtained are summarized as follows. Initiation of the scale starts from corroded parts of the SUS304 stainless steel, and the scale grows by deposition of magnetite particles. CrN coating can be applied to prevent the initiation and deposition of oxide scale. (author)

  5. Sodium-immersed self-cooled electromagnetic pump design and development of a large-scale coil for high temperature

    International Nuclear Information System (INIS)

    Oto, Akihiro; Naohara, Nobuyuki; Ishida, Masayoshi; Katsuki, Kenji; Kumazawa, Ryouji

    1995-01-01

    A sodium-immersed, self-cooled electromagnetic (EM) pump was recently studied as a prospective innovative technology for simplifying fast breeder reactor plant systems. An EM pump of this type intended for use as a primary pump was designed, and its structural concept and system performance were clarified. For flow control, a constant voltage/frequency method was preferable from the point of view of pump performance and efficiency. As part of the development of a large-capacity EM pump, the insulation life of a large-scale coil was tested at high temperature. Mechanical and electrical damage were not observed, and the insulation performance was quite good. The insulation system could also be applied to large-scale coils

  6. Fabrication of high-resolution reflective scale grating for an optical encoder using a patterned self-assembly process

    International Nuclear Information System (INIS)

    Fan, Shanjin; Jiang, Weitao; Li, Xuan; Yu, Haoyu; Lei, Biao; Shi, Yongsheng; Yin, Lei; Chen, Bangdao; Liu, Hongzhong

    2016-01-01

    The steel tape scale grating of a reflective incremental linear encoder has a key impact on the measurement accuracy of the optical encoder. However, it is difficult for conventional manufacturing processes to fabricate scale gratings with high-resolution grating strips, due to process and material problems. In this paper, self-assembly technology was employed to fabricate a high-resolution steel tape scale grating for a reflective incremental linear encoder. Graphene oxide nanoparticles were adopted to form the anti-reflective grating strips of the steel tape scale grating. They were deposited on the tape, which carried a hydrophobic/hydrophilic grating pattern, as the nanoparticle dispersion evaporated. A standard lift-off process was employed to fabricate the hydrophobic grating strips on the steel tape; the steel tape itself presents a hydrophilic property, and the hydrophobic/hydrophilic grating pattern was thus obtained. In this study, octafluorocyclobutane was used to prepare the hydrophobic grating strips, owing to its hydrophobic property. A high-resolution graphene oxide steel tape scale grating with a pitch of 20 μm was obtained through the self-assembly process. The photoelectric signals of the optical encoder containing the graphene oxide scale grating and a conventional scale grating were tested under the same conditions. The comparison test results showed that the graphene oxide scale grating performs better in its amplitude and harmonic components than the conventional steel tape scale. A comparison experiment on position errors was also conducted, demonstrating an improvement in the positioning error of the graphene oxide scale grating. The comparison results demonstrate the applicability of the proposed self-assembly process for fabricating high-resolution graphene oxide scale gratings for reflective incremental linear encoders. (paper)

  7. Strategic Factor Markets, Scale Free Resources and Economic Performance

    DEFF Research Database (Denmark)

    Geisler Asmussen, Christian

    2015-01-01

    This paper analyzes how scale free resources, which can be acquired by multiple firms simultaneously and deployed against one another in product market competition, will be priced in strategic factor markets, and what the consequences are for the acquiring firms' performance. Based on a game-theo...

  8. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing.

    Directory of Open Access Journals (Sweden)

    Hansaim Lim

    2016-10-01

    Full Text Available Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we are presenting a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing

  9. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing.

    Science.gov (United States)

    Lim, Hansaim; Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; He, Di; Zhuang, Luke; Meng, Patrick; Xie, Lei

    2016-10-01

    Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we are presenting a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and
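
    A toy version of the one-class collaborative filtering idea underlying REMAP (weighted low-rank factorization of the chemical-protein interaction matrix, with the dual similarity-graph regularizers omitted) might look like this; all data are synthetic:

```python
import numpy as np

# One-class collaborative filtering sketch on a chemical-protein interaction
# matrix R (1 = known interaction, 0 = unobserved). Observed entries get full
# weight, unobserved entries a small weight, and both factors are
# Frobenius-regularized. REMAP additionally regularizes with chemical-chemical
# and protein-protein similarity graphs, which are omitted here.
rng = np.random.default_rng(1)
n_chem, n_prot, rank = 30, 20, 5
R = (rng.random((n_chem, n_prot)) < 0.05).astype(float)
W = np.where(R > 0, 1.0, 0.1)             # low confidence in the zeros
U = 0.1 * rng.random((n_chem, rank))
V = 0.1 * rng.random((n_prot, rank))
lam, lr = 0.1, 0.05
for _ in range(200):                       # plain weighted gradient descent
    E = W * (U @ V.T - R)
    U -= lr * (E @ V + lam * U)
    V -= lr * (E.T @ U + lam * V)
scores = U @ V.T                           # rank unobserved pairs by score
best = np.unravel_index(np.argmax(scores * (R == 0)), R.shape)
print("top predicted (chemical, protein) pair:", best)
```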

  10. A região cresce mais que a metrópole

    Directory of Open Access Journals (Sweden)

    M. Santos

    1992-01-01

    Full Text Available For a long time, the classical spatial theories (growth poles, central place theory, the rank-size rule, core-periphery) considered large cities as centres of growth, while the rest of the country would have difficulty taking off. Today, however, if one considers the evolution of various economic and social indicators of the different Brazilian metropolitan areas relative to the rest of the country (particularly the interior zones that are beginning to experience modernization), the process they are undergoing could be called metropolitan involution. This would be a consequence of the diffusion across the territory of what has been called the technical-scientific milieu, itself a result of the worldwide diffusion of the variables that characterize the present historical period. Accordingly, the recent transformations of the earth's surface in a great number of places are marked by the contributions of science and technology, such that places come to be characterized by the differences in information between them. A fundamental change would thus be taking place in the geographical milieu, which ceases to be simply a natural or a technical milieu and becomes an international technical-scientific milieu. On this basis, the nature of life contributes to the formation of new social relations that have consequences for the urbanization process. This paper presents the Brazilian case, in which, as a result of the aforementioned metropolitan involution, one can point to a parallel and interdependent phenomenon: growth of the region outpacing that of the metropolis, and a tendency toward a better quality of life in the interior.

  11. High-Performance solar-blind flexible Deep-UV photodetectors based on quantum dots synthesized by femtosecond-laser ablation

    KAUST Repository

    Mitra, Somak

    2018-03-31

    High-performance deep ultraviolet (DUV) photodetectors operating at ambient conditions with <280 nm detection wavelengths are in high demand because of their potential applications in diverse fields. We demonstrate for the first time high-performance flexible DUV photodetectors operating at ambient conditions, based on quantum dots (QDs) synthesized by the femtosecond-laser ablation in liquid (FLAL) technique. Our method is facile, without complex chemical procedures, which allows large-scale, cost-effective devices. This synthesis method is demonstrated to produce highly stable and reproducible ZnO QDs from a zinc nitride (Zn3N2) target without any material degradation due to water and oxygen molecular species, allowing the photodetectors to operate at ambient conditions. The carbon-doped ZnO QD-based photodetector is capable of detecting efficiently in the DUV spectral region, down to 224 nm, and exhibits high photoresponsivity and stability. As a fast response remains a significant parameter for high-speed communication, we also show a fast-response QD-based DUV photodetector. Such surfactant-free synthesis by FLAL can lead to commercially available, high-performance, low-cost optoelectronic devices based on nanostructures for large-scale applications.

  12. Progress in scale-up of second-generation high-temperature superconductors at SuperPower Inc

    International Nuclear Information System (INIS)

    Xie, Y.-Y.; Knoll, A.; Chen, Y.; Li, Y.; Xiong, X.; Qiao, Y.; Hou, P.; Reeves, J.; Salagaj, T.; Lenseth, K.; Civale, L.; Maiorov, B.; Iwasa, Y.; Solovyov, V.; Suenaga, M.; Cheggour, N.; Clickner, C.; Ekin, J.W.; Weber, C.; Selvamanickam, V.

    2005-01-01

    SuperPower is focused on scaling up second-generation (2-G) high-temperature superconductor (HTS) technology to pilot-scale manufacturing. The emphasis of this program is to develop R and D solutions for scale-up issues in pilot-scale operations to lay the foundation for a framework for large-scale manufacturing. Throughput continues to be increased in all process steps, including substrate polishing and buffer and HTS deposition. 2-G HTS conductors have been produced in lengths up to 100 m. Process optimization, with valuable information provided by several unique process-control and quality-control tools, has yielded performances of 6000-7000 A m (77 K, 0 T) in 50-100 m lengths using two HTS fabrication processes: metal organic chemical vapor deposition (MOCVD) and pulsed laser deposition (PLD). Major progress has been made towards the development of practical conductor configurations. Modifications to the HTS fabrication process have resulted in enhanced performance in magnetic fields. Industrial slitting and electroplating processes have been successfully adopted to fabricate tapes in widths of 4 mm and with a copper stabilizer for cable and coil applications. SuperPower's conductor configuration has yielded excellent mechanical properties and overcurrent carrying capability. Over 60 m of such practical conductors with critical current over 100 A/cm-width have been delivered to Sumitomo Electric Industries, Ltd. for prototype cable construction

  13. Progress in scale-up of second-generation high-temperature superconductors at SuperPower Inc

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Y.-Y. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States)]. E-mail: yxie@igc.com; Knoll, A. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Chen, Y. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Li, Y. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Xiong, X. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Qiao, Y. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Hou, P. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Reeves, J. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Salagaj, T. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Lenseth, K. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Civale, L. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Maiorov, B. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Iwasa, Y. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Solovyov, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Suenaga, M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Cheggour, N. [National Institute of Standards and Technology, Boulder, CO 80305 (United States); Clickner, C. [National Institute of Standards and Technology, Boulder, CO 80305 (United States); Ekin, J.W. [National Institute of Standards and Technology, Boulder, CO 80305 (United States); Weber, C. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States); Selvamanickam, V. [SuperPower Inc., 450 Duane Ave., Schenectady, NY 12304 (United States)

    2005-10-01

    SuperPower is focused on scaling up second-generation (2-G) high-temperature superconductor (HTS) technology to pilot-scale manufacturing. The emphasis of this program is to develop R and D solutions for scale-up issues in pilot-scale operations to lay the foundation for a framework for large-scale manufacturing. Throughput continues to be increased in all process steps, including substrate polishing and buffer and HTS deposition. 2-G HTS conductors have been produced in lengths up to 100 m. Process optimization, with valuable information provided by several unique process-control and quality-control tools, has yielded performances of 6000-7000 A m (77 K, 0 T) in 50-100 m lengths using two HTS fabrication processes: metal organic chemical vapor deposition (MOCVD) and pulsed laser deposition (PLD). Major progress has been made towards the development of practical conductor configurations. Modifications to the HTS fabrication process have resulted in enhanced performance in magnetic fields. Industrial slitting and electroplating processes have been successfully adopted to fabricate tapes in widths of 4 mm and with a copper stabilizer for cable and coil applications. SuperPower's conductor configuration has yielded excellent mechanical properties and overcurrent carrying capability. Over 60 m of such practical conductors with critical current over 100 A/cm-width have been delivered to Sumitomo Electric Industries, Ltd. for prototype cable construction.

  14. Design and Performance of Insect-Scale Flapping-Wing Vehicles

    Science.gov (United States)

    Whitney, John Peter

    Micro-air vehicles (MAVs)---small versions of full-scale aircraft---are the product of a continued path of miniaturization which extends across many fields of engineering. Increasingly, MAVs approach the scale of small birds, and most recently, their sizes have dipped into the realm of hummingbirds and flying insects. However, these non-traditional biologically-inspired designs are without well-established design methods, and manufacturing complex devices at these tiny scales is not feasible using conventional manufacturing methods. This thesis presents a comprehensive investigation of new MAV design and manufacturing methods, as applicable to insect-scale hovering flight. New design methods combine an energy-based accounting of propulsion and aerodynamics with a one degree-of-freedom dynamic flapping model. Important results include analytical expressions for maximum flight endurance and range, and predictions for maximum feasible wing size and body mass. To meet manufacturing constraints, the use of passive wing dynamics to simplify vehicle design and control was investigated; supporting tests included the first synchronized measurements of real-time forces and three-dimensional kinematics generated by insect-scale flapping wings. These experimental methods were then expanded to study optimal wing shapes and high-efficiency flapping kinematics. To support the development of high-fidelity test devices and fully-functional flight hardware, a new class of manufacturing methods was developed, combining elements of rigid-flex printed circuit board fabrication with "pop-up book" folding mechanisms. In addition to their current and future support of insect-scale MAV development, these new manufacturing techniques are likely to prove an essential element to future advances in micro-optomechanics, micro-surgery, and many other fields.
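
    A minimal one-degree-of-freedom flapping model in the spirit described above (a simplification, not the thesis's exact equations): wing inertia driven by a sinusoidal torque against quasi-steady velocity-squared drag, with all constants illustrative.

```python
import numpy as np

# One degree-of-freedom flapping model: I * d(w)/dt = T0*sin(2*pi*f*t) - C*w*|w|.
# All constants are illustrative, not taken from the thesis.
I = 1e-9              # wing inertia, kg m^2
C = 5e-9              # quasi-steady aerodynamic drag coefficient
T0, f = 1e-5, 100.0   # drive torque amplitude (N m) and flapping frequency (Hz)
dt, steps = 1e-5, 20000

phi, w, amp = 0.0, 0.0, 0.0
for n in range(steps):
    torque = T0 * np.sin(2 * np.pi * f * n * dt) - C * w * abs(w)
    w += dt * torque / I          # explicit Euler integration
    phi += dt * w
    if n > steps // 2:            # ignore the start-up transient
        amp = max(amp, abs(phi))
print(f"steady-state stroke amplitude ~ {np.degrees(amp):.1f} deg")
```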

  15. Challenges in Improving Cochlear Implant Performance and Accessibility.

    Science.gov (United States)

    Zeng, Fan-Gang

    2017-08-01

    Here I identify two gaps in cochlear implants that have been limiting their performance and acceptance. First, cochlear implant performance has remained largely unchanged, despite the number of publications tripling per decade in the last 30 years. Little has been done so far to address a fundamental limitation in the electrode-to-neuron interface, where the electrode size is a thousand times larger than the neuron diameter while the number of electrodes is a thousand times smaller. Both the small number and the large size of electrodes produce broad spatial activation and poor frequency resolution, which limit current cochlear implant performance. Second, a similarly rapid growth in cochlear implant sales volume has not produced the expected decrease in unit price over the same period. The high cost contributes to a low market penetration rate, which is about 20% in developed countries and less than 1% in developing countries. I will discuss changes needed in both research strategy and business practice to close the gap between prosthetic and normal hearing as well as that between haves and have-nots.

  16. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation for approaching European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also make it possible to detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  17. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. Our key results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
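
    The basic viability argument is arithmetic: compression pays whenever compression time plus the write time of the smaller file beats writing the raw checkpoint. A sketch with synthetic data and an assumed I/O bandwidth:

```python
import os
import time
import zlib

# Synthetic "checkpoint" data; real checkpoints compress differently.
data = (b"state:" + os.urandom(8) + b"0" * 4096) * 2048   # ~8 MB, compressible
t0 = time.perf_counter()
compressed = zlib.compress(data, 1)                       # fast compression level
t_comp = time.perf_counter() - t0

write_bw = 500e6                       # assumed sustainable I/O bandwidth, B/s
t_raw = len(data) / write_bw           # time to write the raw checkpoint
t_cmp = t_comp + len(compressed) / write_bw
print(f"ratio {len(data)/len(compressed):.1f}x, raw {t_raw*1e3:.1f} ms, "
      f"compressed path {t_cmp*1e3:.1f} ms -> {'win' if t_cmp < t_raw else 'loss'}")
```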

  18. The Harvest suite for rapid core-genome alignment and visualization of thousands of intraspecific microbial genomes.

    Science.gov (United States)

    Treangen, Todd J; Ondov, Brian D; Koren, Sergey; Phillippy, Adam M

    2014-01-01

    Whole-genome sequences are now available for many microbial species and clades; however, existing whole-genome alignment methods are limited in their ability to perform sequence comparisons of multiple sequences simultaneously. Here we present the Harvest suite of core-genome alignment and visualization tools for the rapid and simultaneous analysis of thousands of intraspecific microbial strains. Harvest includes Parsnp, a fast core-genome multi-aligner, and Gingr, a dynamic visual platform. Together they provide interactive core-genome alignments, variant calls, recombination detection, and phylogenetic trees. Using simulated and real data, we demonstrate that our approach exhibits unrivaled speed while maintaining the accuracy of existing methods. The Harvest suite is open-source and freely available from: http://github.com/marbl/harvest.
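
    A conceptual sketch of the core-genome SNP idea behind Parsnp's output (made-up sequences; real core-genome alignment starts from unaligned assemblies):

```python
# Among aligned genomes, keep only columns present in every strain (the core
# genome) and report the variable ones as core-genome SNPs. '-' marks missing
# data. Sequences here are invented for illustration.
genomes = {
    "strainA": "ACGTACGT",
    "strainB": "ACGTACCT",
    "strainC": "ACG-ACCT",
}
length = len(next(iter(genomes.values())))
for pos in range(length):
    column = [seq[pos] for seq in genomes.values()]
    if "-" in column:            # column not shared by all strains: not core
        continue
    if len(set(column)) > 1:     # shared and variable: a core-genome SNP
        print(f"core SNP at position {pos}: {column}")
```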

  19. Predicting Pollicipes pollicipes (Crustacea: Cirripedia) abundance on intertidal rocky shores of SW Portugal: a multi-scale approach based on a simple fetch-based wave exposure index

    Directory of Open Access Journals (Sweden)

    David Jacinto

    2016-06-01

    Full Text Available Understanding and predicting patterns of distribution and abundance of marine resources is important for conservation and management purposes in small-scale artisanal fisheries and industrial fisheries worldwide. The goose barnacle (Pollicipes pollicipes) is an important shellfish resource and its distribution is closely related to wave exposure at different spatial scales. We modelled the abundance (percent coverage) of P. pollicipes as a function of a simple wave exposure index based on fetch estimates from digitized coastlines at different spatial scales. The model accounted for 47.5% of the explained deviance and indicated that barnacle abundance increases non-linearly with wave exposure at both the smallest (metres) and largest (kilometres) spatial scales considered in this study. Distribution maps were predicted for the study region in SW Portugal. Our study suggests that the relationship between fetch-based exposure indices and P. pollicipes percent cover may be used as a simple tool for providing stakeholders with information on barnacle distribution patterns. This information may improve assessment of harvesting grounds and the dimension of exploitable areas, aiding management plans and supporting decision making on conservation, harvesting pressure and surveillance strategies for this highly appreciated and socio-economically important marine resource.
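
    A minimal fetch-based exposure index of this general kind (my reading of the approach, not the paper's exact formula) caps and aggregates per-sector open-water distances:

```python
import numpy as np

# Fetch-based wave exposure: from a shore point, measure open-water distance
# (fetch) along compass sectors, cap it, and aggregate. Capping and averaging
# are one of several plausible choices; all distances below are hypothetical.
def exposure_index(fetch_km: np.ndarray, cap_km: float = 500.0) -> float:
    """fetch_km holds one open-water distance per compass sector."""
    return float(np.minimum(fetch_km, cap_km).mean())

sheltered = np.array([2, 5, 1, 0.5] * 4)       # km, hypothetical estuary site
exposed = np.array([500, 500, 300, 80] * 4)    # km, hypothetical open shore
print(f"sheltered: {exposure_index(sheltered):.1f}  exposed: {exposure_index(exposed):.1f}")
```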

  20. A Mountain-Scale Monitoring Network for Yucca Mountain Performance Confirmation

    International Nuclear Information System (INIS)

    Freifeld, Barry; Tsang, Yvonne

    2006-01-01

    Confirmation of the performance of Yucca Mountain is required by 10 CFR Part 63.131 to indicate, where practicable, that the natural system acts as a barrier, as intended. Hence, performance confirmation monitoring and testing would provide data for continued assessment during the pre-closure period. In general, carrying out testing at a relevant scale is always important, and in the case of performance confirmation it is particularly important to be able to test at the scale of the repository. We view the large perturbation caused by construction of the repository at Yucca Mountain as a unique opportunity to study the large-scale behavior of the natural barrier system. Repository construction would necessarily introduce traced fluids and result in the creation of leachates. A program to monitor traced fluids and construction leachates permits evaluation of transport through the unsaturated zone and potentially downgradient through the saturated zone. A robust sampling and monitoring network for continuous measurement of important parameters, and for periodic collection of geochemical samples, is proposed to observe thermo-hydro-geochemical changes near the repository horizon and down to the water table. The sampling and monitoring network can be used to provide data to (1) assess subsurface conditions encountered, and changes in those conditions, during construction and waste emplacement operations; and (2) support modeling to determine that the natural system is functioning as intended.

  1. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  2. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  3. Performance study of protective clothing against hot water splashes: from bench scale test to instrumented manikin test.

    Science.gov (United States)

    Lu, Yehu; Song, Guowen; Wang, Faming

    2015-03-01

    Hot liquid hazards existing in work environments are shown to be a considerable risk for industrial workers. In this study, the protection predicted from fabric tests was assessed by a modified hot liquid splash tester. In these tests, conditions with and without an air spacer were applied. The protective performance of a garment exposed to hot water spray was investigated with a spray manikin evaluation system. A three-dimensional body scanning technique was used to characterize the air gap size between the protective clothing and the manikin skin. The relationship between the bench-scale test and the manikin test was discussed, and a regression model was established to predict the overall percentage of skin burn while wearing protective clothing. The results demonstrated strong correlations between the bench-scale test and the manikin test. Based on these studies, the overall performance of protective clothing against hot water spray can be estimated from the results of the bench-scale hot water splash test and information on the air gap size entrapped in the clothing. The findings provide effective guidance for design and material selection in the development of high-performance protective clothing. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2014.
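
    The final modelling step, regressing manikin-measured burn percentage on the bench-scale result and the scanned air-gap size, can be sketched as an ordinary least-squares fit; the coefficients and data below are entirely synthetic:

```python
import numpy as np

# Synthetic stand-in for the paper's regression: % body burn from the manikin
# test as a linear function of the bench-scale prediction and mean air gap.
rng = np.random.default_rng(2)
bench_burn = rng.uniform(10, 60, 20)      # % burn predicted by the bench test
air_gap_mm = rng.uniform(5, 30, 20)       # mean air gap from 3-D body scans
burn = 0.8 * bench_burn - 0.9 * air_gap_mm + 25 + rng.normal(0, 2, 20)

X = np.column_stack([bench_burn, air_gap_mm, np.ones_like(burn)])
coef, *_ = np.linalg.lstsq(X, burn, rcond=None)
print("fitted coefficients (bench, air gap, intercept):", np.round(coef, 2))
```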

  4. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    Science.gov (United States)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within an HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large, valuable data assets and harmonising the data collections, new opportunities have arisen, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale, high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that

  5. The implementation of sea ice model on a regional high-resolution scale

    Science.gov (United States)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models and satellite data, together with access to high-performance computing clusters, has provided the capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with high resolution, were used to estimate the sensitivity of the model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Ice thickness trends estimated from the Soil Moisture and Ocean Salinity (SMOS) satellite visually agreed with the simulation for 2010-2011.
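
    The validation step amounts to correlating simulated and satellite-derived concentration fields over their common grid while masking land or missing cells; a sketch on synthetic fields:

```python
import numpy as np

# Correlate a simulated ice-concentration field with an observed one over
# their common grid, ignoring land/missing cells. Fields here are synthetic
# stand-ins for model output and an AMSR-E-like product.
def concentration_correlation(sim: np.ndarray, obs: np.ndarray) -> float:
    mask = ~(np.isnan(sim) | np.isnan(obs))
    return float(np.corrcoef(sim[mask], obs[mask])[0, 1])

rng = np.random.default_rng(3)
obs = np.clip(rng.random((50, 50)), 0, 1)                  # "observed" field
sim = np.clip(obs + rng.normal(0, 0.1, obs.shape), 0, 1)   # model with noise
obs[0:5, 0:5] = np.nan                                     # e.g. a land mask
sim[0:5, 0:5] = np.nan
print(f"r = {concentration_correlation(sim, obs):.2f}")
```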

  6. High performance platinum single atom electrocatalyst for oxygen reduction reaction

    Science.gov (United States)

    Liu, Jing; Jiao, Menggai; Lu, Lanlu; Barkholtz, Heather M.; Li, Yuping; Wang, Ying; Jiang, Luhua; Wu, Zhijian; Liu, Di-Jia; Zhuang, Lin; Ma, Chao; Zeng, Jie; Zhang, Bingsen; Su, Dangsheng; Song, Ping; Xing, Wei; Xu, Weilin; Wang, Ying; Jiang, Zheng; Sun, Gongquan

    2017-07-01

    For the large-scale sustainable implementation of polymer electrolyte membrane fuel cells in vehicles, high-performance electrocatalysts with low platinum consumption are desirable for use as cathode material during the oxygen reduction reaction in fuel cells. Here we report a carbon black-supported cost-effective, efficient and durable platinum single-atom electrocatalyst with carbon monoxide/methanol tolerance for the cathodic oxygen reduction reaction. The acidic single-cell with such a catalyst as cathode delivers high performance, with power density up to 680 mW cm-2 at 80 °C with a low platinum loading of 0.09 mgPt cm-2, corresponding to a platinum utilization of 0.13 gPt kW-1 in the fuel cell. Good fuel cell durability is also observed. Theoretical calculations reveal that the main effective sites on such platinum single-atom electrocatalysts are single-pyridinic-nitrogen-atom-anchored single-platinum-atom centres, which are tolerant to carbon monoxide/methanol, but highly active for the oxygen reduction reaction.

  7. A refined TALDICE-1a age scale from 55 to 112 ka before present for the Talos Dome ice core based on high-resolution methane measurements

    Directory of Open Access Journals (Sweden)

    S. Schüpbach

    2011-09-01

    Full Text Available A precise synchronization of different climate records is indispensable for a correct dynamical interpretation of paleoclimatic data. A chronology for the TALDICE ice core from the Ross Sea sector of East Antarctica has recently been presented, based on methane synchronization with Greenland and the EDC ice cores and δ18Oice synchronization with EDC in the bottom part (TALDICE-1). Using new high-resolution methane data obtained with a continuous flow analysis technique, we present a refined age scale for the age interval from 55–112 thousand years (ka) before present, where TALDICE is synchronized with EDC. New and more precise tie points reduce the uncertainties of the age scale from up to 1900 yr in TALDICE-1 to below 1100 yr over most of the refined interval and shift the Talos Dome dating to significantly younger ages during the onset of Marine Isotope Stage 3. Thus, discussions of climate dynamics at sub-millennial time scales are now possible back to 110 ka, in particular during the inception of the last ice age. Calcium data from EDC and TALDICE are compared to show the impact of the refinement on the synchronization of the two ice cores, not only for the gas but also for the ice age scale.
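
    Tie-point synchronization of this kind assigns ages between matched depth/age pairs; the simplest version is linear interpolation (real ice-core chronologies interpolate more carefully, and every value below is invented):

```python
import numpy as np

# Given depths in one core where methane features are matched to dated
# features in a reference core, assign ages to intermediate depths by
# interpolating between tie points. All numbers are invented.
tie_depth_m = np.array([1200.0, 1260.0, 1330.0, 1420.0])   # matched depths
tie_age_ka = np.array([55.0, 68.0, 90.0, 112.0])           # reference ages

sample_depths = np.linspace(1200.0, 1420.0, 5)
sample_ages = np.interp(sample_depths, tie_depth_m, tie_age_ka)
for d, a in zip(sample_depths, sample_ages):
    print(f"depth {d:7.1f} m -> age {a:6.1f} ka BP")
```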

  8. Corrugation Architecture Enabled Ultraflexible Wafer-Scale High-Efficiency Monocrystalline Silicon Solar Cell

    KAUST Repository

    Bahabry, Rabab R.

    2018-01-02

    Advanced classes of modern applications require a new generation of versatile solar cells showcasing extreme mechanical resilience, large scale, low cost, and excellent power conversion efficiency. Conventional crystalline silicon-based solar cells offer one of the most highly efficient power sources, but a key challenge remains to attain mechanical resilience while preserving electrical performance. Here, a complementary metal oxide semiconductor-based integration strategy is presented in which a corrugation architecture enables ultraflexible, low-cost solar cell modules from bulk monocrystalline large-scale (127 × 127 cm) silicon solar wafers with a 17% power conversion efficiency. This periodic corrugated array benefits from an interchangeable solar cell segmentation scheme which preserves the active silicon thickness of 240 μm and achieves flexibility via interdigitated back contacts. These cells can reversibly withstand high mechanical stress and can be deformed into zigzag and bifacial modules. These corrugated silicon-based solar cells offer ultraflexibility with high stability over 1000 bending cycles, including convex and concave bending, broadening the application spectrum. Finally, a bending radius of curvature below 140 μm is shown for the back contacts that carry the solar cell segments.

  9. Corrugation Architecture Enabled Ultraflexible Wafer-Scale High-Efficiency Monocrystalline Silicon Solar Cell

    KAUST Repository

    Bahabry, Rabab R.; Kutbee, Arwa T.; Khan, Sherjeel M.; Sepulveda, Adrian C.; Wicaksono, Irmandy; Nour, Maha A.; Wehbe, Nimer; Almislem, Amani Saleh Saad; Ghoneim, Mohamed T.; Sevilla, Galo T.; Syed, Ahad; Shaikh, Sohail F.; Hussain, Muhammad Mustafa

    2018-01-01

    Advanced classes of modern applications require a new generation of versatile solar cells showcasing extreme mechanical resilience, large scale, low cost, and excellent power conversion efficiency. Conventional crystalline silicon-based solar cells offer one of the most highly efficient power sources, but a key challenge remains to attain mechanical resilience while preserving electrical performance. Here, a complementary metal oxide semiconductor-based integration strategy is presented in which a corrugation architecture enables ultraflexible, low-cost solar cell modules from bulk monocrystalline large-scale (127 × 127 cm) silicon solar wafers with a 17% power conversion efficiency. This periodic corrugated array benefits from an interchangeable solar cell segmentation scheme which preserves the active silicon thickness of 240 μm and achieves flexibility via interdigitated back contacts. These cells can reversibly withstand high mechanical stress and can be deformed into zigzag and bifacial modules. These corrugated silicon-based solar cells offer ultraflexibility with high stability over 1000 bending cycles, including convex and concave bending, broadening the application spectrum. Finally, a bending radius of curvature below 140 μm is shown for the back contacts that carry the solar cell segments.

  10. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module, where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.
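
    For context, the endpoint counts of two classical switched topologies scale as follows with port count k (textbook formulas; the switchless module described above obeys different rules):

```python
# Endpoint scaling of two classical switched topologies versus port count k.
def hosts_fat_tree(k: int) -> int:
    """Hosts supported by a 3-level fat-tree built from k-port switches."""
    return k**3 // 4

def hosts_hypercube(dim: int) -> int:
    """Nodes in a binary hypercube of the given dimension."""
    return 2**dim

for k in (8, 16, 32, 64):
    print(f"k={k:2d}: fat-tree hosts={hosts_fat_tree(k):8d}, "
          f"hypercube nodes={hosts_hypercube(k)}")
```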

  11. The team behind HALO, a large-scale art installation conceived at CERN and inspired by ATLAS data, exhibited at 2018 Art Basel.

    CERN Multimedia

    Marcelloni, Claudia

    2018-01-01

    Merging particle physics and art, a CERN-inspired artwork is being featured for the first time at Art Basel, the international art fair in Basel, Switzerland from 13 to 17 June. A large-scale immersive art installation entitled HALO is the artistic interpretation of the Large Hadron Collider’s ATLAS experiment and celebrates the links between art, science and technology. Inspired by raw data generated by ATLAS, the artwork has been conceived and executed by CERN’s former artists-in-residence, the “Semiconductor” duo Ruth Jarman and Joe Gerhardt, in collaboration with Mónica Bello, curator and head of Arts at CERN. During their three-month Arts at CERN residency in 2015, Semiconductor had the chance to explore particle-collision data in collaboration with scientists from the University of Sussex ATLAS group and work with them on the data later used in the artwork. HALO is a cylindrical structure, measuring ten metres in diameter and surrounded by 4-metre-long vertical piano wires. On the inside, an en...

  12. High performance nano-composite technology development

    International Nuclear Information System (INIS)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D.; Kim, E. K.; Jung, S. Y.; Ryu, H. J.; Hwang, S. S.; Kim, J. K.; Hong, S. M.; Chea, Y. B.; Choi, C. H.; Kim, S. D.; Cho, B. G.; Lee, S. H.

    1999-06-01

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nanocomposite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nanocomposites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nanocomposite studies are confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  13. High performance nano-composite technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nanocomposite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nanocomposites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nanocomposite studies are confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  14. Adverse drug reaction prediction using scores produced by large-scale drug-protein target docking on high-performance computing machines.

    Science.gov (United States)

    LaBute, Montiago X; Zhang, Xiaohua; Lenderman, Jason; Bennion, Brian J; Wong, Sergio E; Lightstone, Felice C

    2014-01-01

    Late-stage or post-market identification of adverse drug reactions (ADRs) is a significant public health issue and a source of major economic liability for drug development. Thus, reliable in silico screening of drug candidates for possible ADRs would be advantageous. In this work, we introduce a computational approach that predicts ADRs by combining the results of molecular docking and leverages known ADR information from DrugBank and SIDER. We employed a recently parallelized version of AutoDock Vina (VinaLC) to dock 906 small molecule drugs to a virtual panel of 409 DrugBank protein targets. L1-regularized logistic regression models were trained on the resulting docking scores of a 560 compound subset from the initial 906 compounds to predict 85 side effects, grouped into 10 ADR phenotype groups. Only 21% (87 out of 409) of the drug-protein binding features involve known targets of the drug subset, providing a significant probe of off-target effects. As a control, associations of this drug subset with the 555 annotated targets of these compounds, as reported in DrugBank, were used as features to train a separate group of models. The Vina off-target models and the DrugBank on-target models yielded comparable median area-under-the-receiver-operating-characteristic-curves (AUCs) during 10-fold cross-validation (0.60-0.69 and 0.61-0.74, respectively). Evidence was found in the PubMed literature to support several putative ADR-protein associations identified by our analysis. Among them, several associations between neoplasm-related ADRs and known tumor suppressor and tumor invasiveness marker proteins were found. A dual role for interstitial collagenase in both neoplasms and aneurysm formation was also identified. These associations all involve off-target proteins and could not have been found using available drug/on-target interaction data. This study illustrates a path forward to comprehensive ADR virtual screening that can potentially scale with increasing number

  15. Adverse drug reaction prediction using scores produced by large-scale drug-protein target docking on high-performance computing machines.

    Directory of Open Access Journals (Sweden)

    Montiago X LaBute

    Full Text Available Late-stage or post-market identification of adverse drug reactions (ADRs) is a significant public health issue and a source of major economic liability for drug development. Thus, reliable in silico screening of drug candidates for possible ADRs would be advantageous. In this work, we introduce a computational approach that predicts ADRs by combining the results of molecular docking and leverages known ADR information from DrugBank and SIDER. We employed a recently parallelized version of AutoDock Vina (VinaLC) to dock 906 small molecule drugs to a virtual panel of 409 DrugBank protein targets. L1-regularized logistic regression models were trained on the resulting docking scores of a 560 compound subset from the initial 906 compounds to predict 85 side effects, grouped into 10 ADR phenotype groups. Only 21% (87 out of 409) of the drug-protein binding features involve known targets of the drug subset, providing a significant probe of off-target effects. As a control, associations of this drug subset with the 555 annotated targets of these compounds, as reported in DrugBank, were used as features to train a separate group of models. The Vina off-target models and the DrugBank on-target models yielded comparable median area-under-the-receiver-operating-characteristic-curves (AUCs) during 10-fold cross-validation (0.60-0.69 and 0.61-0.74, respectively). Evidence was found in the PubMed literature to support several putative ADR-protein associations identified by our analysis. Among them, several associations between neoplasm-related ADRs and known tumor suppressor and tumor invasiveness marker proteins were found. A dual role for interstitial collagenase in both neoplasms and aneurysm formation was also identified. These associations all involve off-target proteins and could not have been found using available drug/on-target interaction data. This study illustrates a path forward to comprehensive ADR virtual screening that can potentially scale with
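
    The modelling core described in both records, L1-regularized logistic regression on a compounds-by-targets docking-score matrix scored by cross-validated AUC, can be sketched with scikit-learn on synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the analysis above: rows are compounds, columns are
# docking scores against a 409-protein panel, and the label marks one ADR
# phenotype group. L1 regularization keeps only a sparse set of proteins.
rng = np.random.default_rng(4)
X = rng.normal(size=(560, 409))                    # docking-score matrix
w = np.zeros(409)
w[rng.choice(409, size=10, replace=False)] = 1.0   # 10 "causal" proteins
y = (X @ w + rng.normal(0, 2, size=560)) > 0       # hypothetical ADR labels

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
aucs = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"median AUC over 10 folds: {np.median(aucs):.2f}")
```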

  16. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs in a configuration approaching the final size. Large-scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems was emulated. The authors present a brief overview of the online system structure, its components, and the large-scale integration tests and their results.

  17. Multi-Agent System Supporting Automated Large-Scale Photometric Computations

    Directory of Open Access Journals (Sweden)

    Adam Sȩdziwy

    2016-02-01

    The technologies related to green energy, smart cities and similar areas, developed dynamically in recent years, frequently face problems of a computational rather than a technological nature. One example is the ability to accurately predict weather conditions for PV farms or wind turbines. Another group of issues relates to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case from the latter group, namely designing large-scale power-saving lighting installations. The term "large-scale" refers to an entire city area containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, the task becomes infeasible when it covers thousands of luminaires described by precise coordinates (rather than simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform the design-related computations in parallel. An important measure introduced in the article, indicating optimization progress, is entropy; it also allows optimization to be terminated once the solution is satisfactory. The article contains the results of real-life calculations made with the help of the presented approach.
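
    The entropy measure can be illustrated with a short sketch: Shannon entropy over the distribution of candidate setups held by the agents falls to zero as they converge. The setup names and the termination threshold below are hypothetical:

    ```python
    import math
    from collections import Counter

    def entropy(solutions):
        """Shannon entropy (bits) of the distribution of candidate
        setups currently held by the agents; 0 means full agreement."""
        counts = Counter(solutions)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())

    # Hypothetical optimization loop: stop once the agents have
    # largely converged on one luminaire setup.
    population = ["setup-A", "setup-B", "setup-A", "setup-C", "setup-A"]
    print(f"entropy = {entropy(population):.3f} bits")
    if entropy(population) < 0.5:
        print("solution considered satisfying; terminate optimization")
    ```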

  18. Safety trends in small-scale coal mines in developing countries with particular reference to China, India and Pakistan

    International Nuclear Information System (INIS)

    Jadoon, K.G.; Akbar, S.; Edwards, J.S.

    2004-01-01

    Small-scale mining for coal is practiced all over the world, but a major proportion of these mines is located in the developing countries of Asia. China, India and Pakistan are the main producers of coal from small-scale mines. Owing to the poor safety conditions prevailing in these mines, a large number of workers receive injuries ranging from minor to fatal. Gas explosions/outbursts, roof falls and material handling are the main causes of the majority of accidents occurring in small-scale mines; in China, thousands of workers are killed by gas explosions/outbursts every year. Lack of financial resources, inadequate education and training of workers, contractual labour systems and a lack of commitment to improving safety and health are the main contributors to the poor safety performance of this sector of mining. (author)

  19. Improved uniformity in high-performance organic photovoltaics enabled by (3-aminopropyl)triethoxysilane cathode functionalization.

    Science.gov (United States)

    Luck, Kyle A; Shastry, Tejas A; Loser, Stephen; Ogien, Gabriel; Marks, Tobin J; Hersam, Mark C

    2013-12-28

    Organic photovoltaics have the potential to serve as lightweight, low-cost, mechanically flexible solar cells. However, losses in efficiency as laboratory cells are scaled up to the module level have to date impeded large-scale deployment. Here, we report that a 3-aminopropyltriethoxysilane (APTES) cathode interfacial treatment significantly enhances performance reproducibility in inverted high-efficiency PTB7:PC71BM organic photovoltaic cells, as demonstrated by the fabrication of 100 APTES-treated devices versus 100 untreated controls. The APTES-treated devices achieve a power conversion efficiency of 8.08 ± 0.12% with a histogram skewness of -0.291, whereas the untreated controls achieve 7.80 ± 0.26% with a histogram skewness of -1.86. By substantially suppressing the interfacial origins of underperforming cells, the APTES treatment offers a pathway for fabricating large-area modules with high spatial performance uniformity.
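
    The reported statistics can be reproduced in outline as follows; the efficiency values are synthetic stand-ins for the two 100-device histograms, and SciPy's sample skewness is assumed to approximate the definition used by the authors:

    ```python
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for the two 100-device efficiency histograms.
    pce_aptes = rng.normal(8.08, 0.12, size=100)
    pce_control = rng.normal(7.80, 0.26, size=100)

    # A strongly negative skewness signals a tail of underperformers.
    for label, pce in [("APTES", pce_aptes), ("control", pce_control)]:
        print(f"{label:>8}: {pce.mean():.2f} +/- {pce.std(ddof=1):.2f} %,"
              f" skewness = {skew(pce):.3f}")
    ```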

  20. Performance and scaling of a novel locomotor structure: adhesive capacity of climbing gobiid fishes.

    Science.gov (United States)

    Maie, Takashi; Schoenfuss, Heiko L; Blob, Richard W

    2012-11-15

    Many species of gobiid fishes adhere to surfaces using a sucker formed from the fusion of the pelvic fins. Juveniles of many amphidromous species use this pelvic sucker to scale waterfalls during migrations to upstream habitats after an oceanic larval phase. However, adults may still use suckers to re-scale waterfalls if displaced. If attachment force is proportional to sucker area and if growth of the sucker is isometric, then increases in the forces that climbing fish must resist might outpace adhesive capacity, causing climbing performance to decline through ontogeny. To test for such trends, we measured pressure differentials and adhesive suction forces generated by the pelvic sucker across wide size ranges in six goby species, including climbing and non-climbing taxa. Suction was achieved via two distinct growth strategies: (1) small suckers with isometric (or negatively allometric) scaling among climbing gobies and (2) large suckers with positively allometric growth in non-climbing gobies. Species using the first strategy show a high baseline of adhesive capacity that may aid climbing performance throughout ontogeny, with pressure differentials and suction forces much greater than expected if adhesion were a passive function of sucker area. In contrast, the large suckers possessed by non-climbing species may help compensate for reduced pressure differentials, thereby producing suction sufficient to support body weight. Climbing Sicyopterus species also use oral suckers while climbing waterfalls, and these exhibited scaling patterns similar to those of the pelvic suckers. However, oral suction force was considerably lower than that of the pelvic suckers, reducing the ability of these fish to attach to substrates by the oral sucker alone.
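
    The scaling argument can be made concrete with a short sketch: fitting the allometric exponent b in F = a·L^b on log-log axes, where b = 2 corresponds to isometric growth of a force that tracks sucker area, while body weight grows roughly as L^3. The measurements below are hypothetical:

    ```python
    import numpy as np

    def scaling_exponent(length, force):
        """Fit F = a * L**b by least squares on log-log axes and
        return b. Isometry predicts b = 2 if suction force tracks
        sucker area, while body weight grows roughly as L**3."""
        b, _ = np.polyfit(np.log(length), np.log(force), 1)
        return b

    # Hypothetical data: body lengths (cm) and suction forces (mN).
    L = np.array([2.0, 3.5, 5.0, 7.5, 10.0])
    F = np.array([12.0, 40.0, 85.0, 200.0, 370.0])

    b = scaling_exponent(L, F)
    print(f"fitted exponent b = {b:.2f}")
    print("suction outpaces sucker area" if b > 2
          else "isometric or negatively allometric")
    ```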

  1. Religion in High-Performance Athletes: An exploratory study about the dynamic

    Directory of Open Access Journals (Sweden)

    Sónia Morgado

    2017-10-01

    The religious phenomenon is considered a tool for modulating behaviors and cognitions, and it therefore influences every aspect of life, including sport. Religion and its effect on sport, especially on high-performance athletes, remain to be analyzed. The athletes were assessed with the Religious Interiorization Scale (Barros, 2005). The instrument was applied to athletes from high-performance sports centers, as a function of gender, age, and religion. The results showed that religion does not shape the athletes' outlook. Even though the results provide no evidence of the importance of religion in sports, it would be useful for coaches, managers, and team leaders to incorporate and contextualize the athletes' beliefs and religious rituals in the training process.

  2. Performance evaluation of thin wearing courses through scaled accelerated trafficking.

    Science.gov (United States)

    2014-01-01

    The primary objective of this study was to evaluate the permanent deformation (rutting) and fatigue performance of several thin asphalt concrete wearing courses using a scaled-down accelerated pavement testing device. The accelerated testing was ...

  3. Fabrication of highly dispersed ZnO nanoparticles embedded in graphene nanosheets for high performance supercapacitors

    International Nuclear Information System (INIS)

    Fang, Linxia; Zhang, Baoliang; Li, Wei; Zhang, Jizhong; Huang, Kejing; Zhang, Qiuyu

    2014-01-01

    We report a facile strategy to synthesize ZnO-graphene nanocomposites as an advanced electrode material for high-performance supercapacitors. The ZnO-graphene nanocomposites have been fabricated via a facile, low-temperature in situ wet chemistry process. During this process, highly dispersed ZnO nanoparticles are embedded in graphene nanosheets, leading to sandwich-structured ZnO-graphene nanocomposites. Thus, intimate interfacial contact between the ZnO nanoparticles and the graphene nanosheets is achieved, which facilitates electrochemical activity and enhances the electrochemical properties due to fast electron transfer. The as-prepared ZnO-graphene nanocomposites exhibit a maximum specific capacitance of 786 F g−1 and excellent cycle life, with capacity retention of about 92% after 500 cycles. This facile design and rational synthesis offers an effective strategy to enhance the electrochemical performance of supercapacitors and shows promising potential for large-scale application in energy storage.

  4. Solar energy in the context of energy use, energy transportation and energy storage.

    Science.gov (United States)

    MacKay, David J C

    2013-08-13

    Taking the UK as a case study, this paper describes current energy use and a range of sustainable energy options for the future, including solar power and other renewables. I focus on the area involved in collecting, converting and delivering sustainable energy, looking in particular detail at the potential role of solar power. Britain consumes energy at a rate of about 5000 watts per person, and its population density is about 250 people per square kilometre. If we multiply the per capita energy consumption by the population density, then we obtain the average primary energy consumption per unit area, which for the UK is 1.25 watts per square metre. This areal power density is uncomfortably similar to the average power density that could be supplied by many renewables: the gravitational potential energy of rainfall in the Scottish highlands has a raw power per unit area of roughly 0.24 watts per square metre; energy crops in Europe deliver about 0.5 watts per square metre; wind farms deliver roughly 2.5 watts per square metre; solar photovoltaic farms in Bavaria, Germany, and Vermont, USA, deliver 4 watts per square metre; in sunnier locations, solar photovoltaic farms can deliver 10 watts per square metre; concentrating solar power stations in deserts might deliver 20 watts per square metre. In a decarbonized world that is renewable-powered, the land area required to maintain today's British energy consumption would have to be similar to the area of Britain. Several other high-density, high-consuming countries are in the same boat as Britain, and many other countries are rushing to join us. Decarbonizing such countries will only be possible through some combination of the following options: the embracing of country-sized renewable power-generation facilities; large-scale energy imports from country-sized renewable facilities in other countries; population reduction; radical efficiency improvements and lifestyle changes; and the growth of non-renewable low
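
    The arithmetic behind these figures is worth spelling out. The following sketch reproduces the 1.25 W/m^2 demand estimate and compares it with the supply densities quoted above (all figures from the text):

    ```python
    # Per capita consumption times population density gives the
    # average primary power demand per unit land area.
    watts_per_person = 5000.0
    people_per_km2 = 250.0

    demand_w_per_m2 = watts_per_person * people_per_km2 / 1e6
    print(f"UK primary demand: {demand_w_per_m2:.2f} W/m^2")  # 1.25

    # Supply-side power densities quoted in the text (W/m^2 of land).
    supply = {"highland hydro": 0.24, "energy crops": 0.5,
              "wind farms": 2.5, "PV (Bavaria/Vermont)": 4.0,
              "PV (sunny locations)": 10.0, "CSP in deserts": 20.0}

    # Fraction of the national land area each source would need to
    # cover on its own to meet the whole demand.
    for source, w_per_m2 in supply.items():
        share = demand_w_per_m2 / w_per_m2
        print(f"{source:>22}: {share:6.1%} of land area")
    ```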

  5. A deposition record of inorganic ions from a high-alpine glacier

    Energy Technology Data Exchange (ETDEWEB)

    Huber, T. [Bern Univ. (Switzerland); Bruetsch, S.; Gaeggeler, H.W.; Schotterer, U.; Schwikowski, M. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-09-01

    The lowest five metres of an ice core from a high-alpine glacier (Colle Gnifetti, Monte Rosa massif, 4450 m a.s.l., Switzerland) were analysed for ammonium, calcium, chloride, magnesium, nitrate, potassium, sodium, and sulphate by ion chromatography. (author). 1 fig., 3 refs.

  6. High Performance Processing and Analysis of Geospatial Data Using CUDA on GPU

    Directory of Open Access Journals (Sweden)

    STOJANOVIC, N.

    2014-11-01

    In this paper, the high-performance processing of massive geospatial data on a many-core GPU (Graphics Processing Unit) is presented. We use the CUDA (Compute Unified Device Architecture) programming framework to implement parallel processing of common Geographic Information Systems (GIS) algorithms, such as viewshed analysis and map-matching. Experimental evaluation indicates an improvement in performance with respect to CPU-based solutions and shows the feasibility of using GPUs and CUDA for the parallel implementation of GIS algorithms over large-scale geospatial datasets.

  7. Scaling and design analyses of a scaled-down, high-temperature test facility for experimental investigation of the initial stages of a VHTR air-ingress accident

    International Nuclear Information System (INIS)

    Arcilesi, David J.; Ham, Tae Kyu; Kim, In Hun; Sun, Xiaodong; Christensen, Richard N.; Oh, Chang H.

    2015-01-01

    Highlights: • A 1/8th geometric-scale test facility that models the VHTR hot plenum is proposed. • Geometric scaling analysis is introduced for VHTR to analyze air-ingress accident. • Design calculations are performed to show that accident phenomenology is preserved. • Some analyses include time scale, hydraulic similarity and power scaling analysis. • Test facility has been constructed and shake-down tests are currently being carried out. - Abstract: A critical event in the safety analysis of the very high-temperature gas-cooled reactor (VHTR) is an air-ingress accident. This accident is initiated, in its worst case scenario, by a double-ended guillotine break of the coaxial cross vessel, which leads to a rapid reactor vessel depressurization. In a VHTR, the reactor vessel is located within a reactor cavity that is filled with air during normal operating conditions. Following the vessel depressurization, the dominant mode of ingress of an air–helium mixture into the reactor vessel will either be molecular diffusion or density-driven stratified flow. The mode of ingress is hypothesized to depend largely on the break conditions of the cross vessel. Since the time scales of these two ingress phenomena differ by orders of magnitude, it is imperative to understand under which conditions each of these mechanisms will dominate in the air ingress process. Computer models have been developed to analyze this type of accident scenario. There are, however, limited experimental data available to understand the phenomenology of the air-ingress accident and to validate these models. Therefore, there is a need to design and construct a scaled-down experimental test facility to simulate the air-ingress accident scenarios and to collect experimental data. The current paper focuses on the analyses performed for the design and operation of a 1/8th geometric scale (by height and diameter), high-temperature test facility. A geometric scaling analysis for the VHTR, a time

  8. Protective design of critical infrastructure with high performance concretes

    International Nuclear Information System (INIS)

    Riedel, W.; Nöldgen, M.; Stolz, A.; Roller, C.

    2012-01-01

    Conclusions: High performance concrete constructions will allow innovative design solutions for critical infrastructures. Validation of the engineering methods can rest on large- and model-scale experiments conducted on conventional concrete structures. New, consistent impact experiments show extreme protection potential for UHPC. Modern FEM with concrete models and explicit rebar can model HPC and UHPC penetration resistance. SDOF and TDOF approaches are valuable design tools at the local and global level. Combining at least two of the three design methods (FEM, xDOF, experiment) allows reliable prediction and efficient, innovative designs.
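
    The SDOF approach mentioned above can be sketched as follows: an undamped, elastic single-degree-of-freedom idealization of a structural member driven by a triangular impulse and integrated with central differences. The mass, stiffness, and load parameters are hypothetical, not taken from the paper:

    ```python
    import numpy as np

    def sdof_response(m, k, load, dt, steps):
        """Central-difference integration of m*x'' + k*x = F(t) for
        an undamped elastic SDOF system starting at rest."""
        x = np.zeros(steps)
        a0 = load(0.0) / m
        x_prev = 0.5 * a0 * dt**2   # x at t = -dt for zero velocity
        for i in range(1, steps):
            a = (load((i - 1) * dt) - k * x[i - 1]) / m
            x[i] = 2 * x[i - 1] - x_prev + a * dt**2
            x_prev = x[i - 1]
        return x

    def blast(t, peak=1e6, dur=5e-3):
        # Hypothetical triangular pulse: 1 MN decaying to 0 in 5 ms.
        return peak * max(0.0, 1.0 - t / dur)

    x = sdof_response(m=2000.0, k=5e7, load=blast, dt=1e-5, steps=4000)
    print(f"peak displacement: {x.max() * 1e3:.1f} mm")
    ```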

  9. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) tools for constructing collaborative, application-oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) a comprehensive and consistent set of location-independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location-independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information-based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  10. High performance light water reactor

    International Nuclear Information System (INIS)

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high-efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners, together with the University of Tokyo, is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advances achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, the availability of plant components and the availability of high-temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option by supplying competitively priced electricity, as well as to continue nuclear competence in LWR technology. The following is a brief summary of the main project achievements: • A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project. • Extensive studies have been performed in the last 10 years by the University of Tokyo; therefore, a 'reference design' developed by the University of Tokyo was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo. • A benchmark problem based on the 'reference design' was defined for neutronics calculations, and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition helped to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. • A preliminary selection was made for the HPLWR scale

  11. Performance assessment of laboratory and field-scale multi-step passive treatment of iron-rich acid mine drainage for design improvement.

    Science.gov (United States)

    Rakotonimaro, Tsiverihasina V; Neculita, Carmen Mihaela; Bussière, Bruno; Genty, Thomas; Zagury, Gérald J

    2018-04-17

    Multi-step passive systems for the treatment of iron-rich acid mine drainage (Fe-rich AMD) perform satisfactorily at the laboratory scale. However, their field-scale application has revealed dissimilarities in performance, particularly with respect to hydraulic parameters. In this study, an assessment of the factors potentially responsible for the variations in performance between laboratory- and field-scale multi-step systems was undertaken. Three laboratory multi-step treatment scenarios, involving a combination of dispersed alkaline substrate (DAS) units, anoxic dolomitic drains, and passive biochemical reactors (PBRs), were set up in 10.7-L columns. The field-scale treatment consisted of two PBRs separated by a wood ash (WA) reactor. The parameters identified as possibly influencing the performance of the laboratory- and field-scale experiments were the AMD chemistry (electrical conductivity and Fe and SO4^2- concentrations), the flow rate (Q), and the saturated hydraulic conductivity (k_sat). Based on these findings, the design of an efficient passive multi-step treatment system should consider the following: (1) Fe pretreatment using materials with high k_sat and low HRT, with a limited Fe load if a PBR is to be used; (2) a PBR/DAS filled with a mixture containing at least 20% neutralizing agent; (3) inclusion of Q and k_sat (> 10^-3 cm/s) in the long-term prediction. Finally, mesocosm testing is strongly recommended prior to the construction of full-scale systems for the treatment of Fe-rich AMD.

  12. Denitrification of high strength nitrate waste from a nuclear industry using acclimatized biomass in a pilot scale reactor.

    Science.gov (United States)

    Dhamole, Pradip B; Nair, Rashmi R; D'Souza, Stanislaus F; Pandit, Aniruddha B; Lele, S S

    2015-01-01

    This work investigates the performance of acclimatized biomass for the denitrification of high-strength nitrate waste (10,000 mg/L NO3) from a nuclear industry in a continuous laboratory-scale (32 L) and a pilot-scale reactor (330 L), operated over periods of 4 and 5 months, respectively. The effect of substrate fluctuations (mainly C/NO3-N) on denitrification was studied in the laboratory-scale reactor. Incomplete denitrification (95-96%) was observed at low C/NO3-N (≤2), whereas high C/NO3-N (≥2.25) led to ammonia formation. Ammonia production increased from 1 to 9% with an increase in C/NO3-N from 2.25 to 6. Complete denitrification and no ammonia formation were observed at an optimum C/NO3-N of 2.0. Microbiological studies showed a decrease in denitrifiers and an increase in nitrite-oxidizing bacteria and ammonia-oxidizing bacteria at high C/NO3-N (≥2.25). Pilot-scale studies were carried out at the optimum C/NO3-N, and the sustainability of the process was verified at the pilot scale for 5 months.

  13. Local Electric Field Facilitates High-Performance Li-Ion Batteries.

    Science.gov (United States)

    Liu, Youwen; Zhou, Tengfei; Zheng, Yang; He, Zhihai; Xiao, Chong; Pang, Wei Kong; Tong, Wei; Zou, Youming; Pan, Bicai; Guo, Zaiping; Xie, Yi

    2017-08-22

    By scrutinizing the energy storage process in Li-ion batteries, it becomes clear that tuning Li-ion migration behavior through atomic-level tailoring unlocks great potential for higher electrochemical performance. Vacancies, which even in tiny concentrations can effectively modulate electrical ordering on the nanoscale, provide tempting opportunities for manipulating Li-ion migratory behavior. Herein, taking CuGeO3 as a model, oxygen vacancies obtained by reducing the thickness dimension down to the atomic scale are introduced. As Li-ion storage progresses, the imbalanced charge distribution emerging around the oxygen vacancies can induce a local built-in electric field, which accelerates the ions' migration rate through Coulomb forces and thus benefits high-rate performance. Furthermore, the thus-obtained CuGeO3 ultrathin nanosheet (CGOUNs)/graphene van der Waals heterojunctions are used as anodes in Li-ion batteries, delivering a reversible specific capacity of 1295 mAh g−1 at 100 mA g−1, with improved rate capability and cycling performance compared to their bulk counterpart. Our findings build a clear connection between the atomic/defect/electronic structure and the intrinsic properties for designing high-efficiency electrode materials.

  14. The central city of Mexico City: a space of employment opportunity for the metropolis?

    Directory of Open Access Journals (Sweden)

    Clara Eugenia Salazar

    2010-01-01

    The central areas of large metropolises have lost population and transformed their economic activities in response to the intra-metropolitan redistribution of population and changes in occupational demand. Does this mean that city centres have lost their centrality? The concept of centrality can be approached from different perspectives, but all of them emphasize the spatial concentration of urban functions and economic activities. This paper analyses the evolution of occupational demand in the Metropolitan Zone of Mexico City between 1980 and 2003, and in particular in its central city. The study period falls within a national and local context of economic restructuring, and the interest lies in understanding the role of the central city in economic growth and in the generation of metropolitan employment, as well as in its economic and occupational transformation.

  15. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  16. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns makes it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
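
    A stripped-down illustration of such a model is the classic latency-bandwidth (alpha-beta) estimate summed over a per-process message inventory. The message counts and sizes below are rough FMM-flavoured assumptions, and the sketch omits the topology and multicore penalties that the full model accounts for:

    ```python
    def comm_time(messages, latency=1e-6, bandwidth=4e9):
        """Alpha-beta estimate t = alpha + bytes/beta summed over all
        point-to-point messages (latency in s, bandwidth in B/s)."""
        return sum(latency + nbytes / bandwidth for nbytes in messages)

    def fmm_message_sizes(ncrit=64, neighbors=26, m2l_sources=189,
                          multipole_terms=55, bytes_per_value=8):
        """Very rough per-process inventory for one FMM evaluation:
        halo exchange of leaf particles plus M2L multipole transfers
        (189 is the usual M2L interaction count per cell in 3D)."""
        particle = [ncrit * 4 * bytes_per_value] * neighbors  # x,y,z,q
        m2l = [multipole_terms * 2 * bytes_per_value] * m2l_sources
        return particle + m2l

    msgs = fmm_message_sizes()
    print(f"{len(msgs)} messages, predicted communication time "
          f"{comm_time(msgs) * 1e6:.1f} microseconds")
    ```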

  17. High performance ultraviolet photodetectors based on ZnO nanoflakes/PVK heterojunction

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Yuhua; Xiang, Jinzhong, E-mail: jzhxiang@ynu.edu.cn [School of Physical and Astronomy, Yunnan University, Kunming 650091 (China); Tang, Libin, E-mail: scitang@163.com; Ji, Rongbin, E-mail: jirongbin@gmail.com; Zhao, Jun; Kong, Jincheng [Kunming Institute of Physics, Kunming 650223 (China); Lai, Sin Ki; Lau, Shu Ping [Department of Applied Physics, The Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong); Zhang, Kai [Suzhou Institute of Nano-Tech and Nano-Bionics (SINANO), Chinese Academy of Science, Suzhou 215123 (China)

    2016-08-15

    A high-performance ultraviolet (UV) photodetector is receiving increasing attention due to its significant applications in fire warning, environmental monitoring, scientific research, astronomical observation, etc. Enhancement of UV photodetector performance has been impeded by the lack of a high-efficiency heterojunction in which UV photons can efficiently convert into charges. In this work, high-performance UV photodetectors have been realized by utilizing organic/inorganic heterojunctions based on a ZnO nanoflakes/poly(N-vinylcarbazole) hybrid. A transparent conducting polymer poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate)-coated quartz substrate is employed as the anode in place of the commonly used ITO-coated glass in order to harvest shorter-wavelength UV light. The devices show a low dark current density, with a high responsivity (R) of 7.27 × 10^3 A/W and a specific detectivity (D*) of 6.20 × 10^13 cm Hz^1/2 W^-1 at 2 V bias voltage in an ambient environment (1.30 mW/cm^2 at λ = 365 nm), corresponding to enhancements in R and D* of 49% and one order of magnitude, respectively. The study sheds light on developing high-performance, large-scale-array, flexible UV detectors using a solution-processable method.

  18. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad; Keyes, David E.; Ltaief, Hatem

    2014-01-01

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus

  19. Sadako and the Thousand Paper Cranes: The Dialogic Narrative in the Educational Act

    Science.gov (United States)

    Al-Jafar, Ali A.

    2016-01-01

    This study used the story of "Sadako and the thousand paper cranes" by Coerr (1977) to discover similarities between the events of August 1945 in Hiroshima and the events of August 1990 in Kuwait. The participants in a children's literature class at Kuwait University folded paper cranes and wrote in their journals to answer two…

  20. Plasma performance and scaling laws in the RFX-mod reversed-field pinch experiment

    International Nuclear Information System (INIS)

    Innocente, P.; Alfier, A.; Canton, A.; Pasqualotto, R.

    2009-01-01

    The large range of plasma currents (I_p = 0.2-1.6 MA) and the feedback-controlled magnetic boundary conditions of the RFX-mod experiment make it well suited to scaling studies. The assessment of such scalings, in particular those for temperature and energy confinement, is crucial both for improving the operating reversed-field pinch (RFP) devices and for validating the RFP configuration as a candidate for future fusion reactors. For this purpose, scaling laws for magnetic fluctuations, temperature and energy confinement have been evaluated in stationary operation. The RFX-mod scaling laws have been compared with those obtained from other RFP devices and from numerical simulations. The role of the magnetic boundary has been analysed by comparing discharges performed with different active control schemes for the edge radial magnetic field.

  1. Analysis of Non-Volatile Chemical Constituents of Menthae Haplocalycis Herba by Ultra-High Performance Liquid Chromatography-High Resolution Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Lu-Lu Xu

    2017-10-01

    Menthae Haplocalycis herba, a Chinese edible herb, has been widely used clinically in China for thousands of years. Over the last decades, studies on the chemical constituents of Menthae Haplocalycis herba have been widely performed. However, less attention has been paid to the non-volatile components, which are also responsible for its medical efficacy, than to the volatile constituents. Therefore, a rapid and sensitive method was developed for the comprehensive identification of the non-volatile constituents in Menthae Haplocalycis herba using ultra-high performance liquid chromatography coupled with linear ion trap-Orbitrap mass spectrometry (UHPLC-LTQ-Orbitrap). Separation was performed with an Acquity UPLC® BEH C18 column (2.1 mm × 100 mm, 1.7 μm), with 0.2% formic acid aqueous solution and acetonitrile as the mobile phase under gradient conditions. Based on accurate mass measurement (<5 ppm), MS/MS fragmentation patterns and different chromatographic behaviors, a total of 64 compounds were unambiguously or tentatively characterized, including 30 flavonoids, 20 phenolic acids, 12 terpenoids and two phenylpropanoids. Finally, targeted isolation of three compounds, namely acacetin, rosmarinic acid and clemastanin A (the last isolated from Menthae Haplocalycis herba for the first time), was performed based on the obtained results, which further confirmed the deduced fragmentation patterns and the identified compound profile of Menthae Haplocalycis herba. Our research is the first to systematically elucidate the non-volatile components of Menthae Haplocalycis herba, laying the foundation for further pharmacological and metabolic studies. The established method is useful and efficient for screening and identifying targeted constituents in traditional Chinese medicine extracts.

  2. Scalable High Performance Message Passing over InfiniBand for Open MPI

    Energy Technology Data Exchange (ETDEWEB)

    Friedley, A; Hoefler, T; Leininger, M L; Lumsdaine, A

    2007-10-24

    InfiniBand (IB) is a popular network technology for modern high-performance computing systems. MPI implementations traditionally support IB using a reliable, connection-oriented (RC) transport. However, per-process resource usage that grows linearly with the number of processes makes this approach prohibitive for large-scale systems. IB provides an alternative in the form of a connectionless unreliable datagram transport (UD), which allows for near-constant resource usage and initialization overhead as the process count increases. This paper describes a UD-based implementation for IB in Open MPI as a scalable alternative to existing RC-based schemes. We use the software reliability capabilities of Open MPI to provide the guaranteed delivery semantics required by MPI. Results show that UD not only requires fewer resources at scale, but also allows for shorter MPI startup times. A connectionless model also improves performance for applications that tend to send small messages to many different processes.
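
    The resource argument can be sketched numerically. Assuming, for illustration only, a fixed memory cost per reliable connection, per-process RC memory grows linearly with the number of peers while UD stays near-constant:

    ```python
    def rc_memory_mb(nprocs, per_conn_kb=68.0):
        """Reliable-connection transport: roughly one queue pair per
        peer, so per-process memory grows linearly with job size.
        The per-connection cost here is an assumed illustrative value."""
        return (nprocs - 1) * per_conn_kb / 1024.0

    def ud_memory_mb(nprocs, fixed_kb=512.0):
        """Unreliable-datagram transport: a near-constant footprint
        independent of the number of peers (nprocs is unused)."""
        return fixed_kb / 1024.0

    for p in (64, 1024, 16384):
        print(f"{p:>6} procs: RC ~{rc_memory_mb(p):8.1f} MB, "
              f"UD ~{ud_memory_mb(p):4.1f} MB per process")
    ```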

  3. Performance Prediction for Large-Scale Nuclear Waste Repositories: Final Report

    International Nuclear Information System (INIS)

    Glassley, W E; Nitao, J J; Grant, W; Boulos, T N; Gokoffski, M O; Johnson, J W; Kercher, J R; Levatin, J A; Steefel, C I

    2001-01-01

    The goal of this project was the development of a software package capable of utilizing terascale computational platforms for solving subsurface flow and transport problems important for the disposal of high-level nuclear waste materials, as well as for DOE-complex clean-up and stewardship efforts. We sought to develop a tool that would diminish reliance on abstracted models and realistically represent the coupling between subsurface fluid flow, thermal effects and chemical reactions that both modify the physical framework of the rock materials and change the rock mineralogy and the chemistry of the migrating fluid. Providing such a capability enhances realism in models and increases confidence in long-term predictions of performance. Achieving this goal also allows more cost-effective design and execution of the monitoring programs needed to evaluate model results. This goal was successfully accomplished through the development of a new simulation tool (NUFT-C). This capability allows high-resolution modeling of complex coupled thermal-hydrological-geochemical processes in the saturated and unsaturated zones of the Earth's crust. The code allows consideration of a virtually unlimited number of chemical species and minerals in a multi-phase, non-isothermal environment. Because the code is constructed to utilize the computational power of the terascale IBM ASCI computers, simulations that encompass large rock volumes and complex chemical systems can now be done without sacrificing spatial or temporal resolution. The code is capable of one-, two-, and three-dimensional simulations, allowing unprecedented evaluation of the evolution of rock properties and of mineralogical and chemical change as a function of time. The code has been validated by comparing the results of simulations to laboratory-scale experiments, other benchmark codes, field-scale experiments, and observations in natural systems. The results of these exercises demonstrate that the physics and chemistry

  4. Evaluation of high-performance network technologies for ITER

    International Nuclear Information System (INIS)

    Zagar, K.; Hunt, S.; Kolaric, P.; Sabjan, R.; Zagar, A.; Dedic, J.

    2010-01-01

    For the fast feedback plasma controllers, ITER's Control, Data Access and Communication system (CODAC) will need to provide a mechanism for hard real-time communication between its distributed nodes. In particular, the ITER CODAC team identified four types of high-performance communication applications. The Synchronous Databus Network (SDN) is to provide the ability to distribute plasma parameters (estimated at about 5000 double-valued signals) across the system to allow for 1 ms control cycles. The Event Distribution Network (EDN) and Time Communication Network (TCN) are to allow synchronization of node I/O operations to 10 ns. Finally, the Audio-Video Network (AVN) is to provide sufficient bandwidth for streaming of surveillance and diagnostics video at a high resolution (1024 × 1024) and frame rate (30 Hz). In this article, we present some combinations of commercial off-the-shelf (COTS) technologies that allow the above requirements to be met. We also present the performance achieved in a practical (though small-scale) technology demonstrator, which involved a real-time Linux operating system running on National Instruments' PXI platform, UDP communication implemented directly atop the Ethernet network adapter, Cisco switches, Micro-Research Finland's timing and event solution, and GigE audio-video streaming.
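
    A back-of-envelope check of the stated requirements, using only figures quoted above (the AVN pixel depth is an assumed illustrative value):

    ```python
    # SDN: 5000 double-valued signals per 1 ms control cycle.
    signals = 5000
    bytes_per_signal = 8      # IEEE 754 double
    cycle_s = 1e-3

    payload_rate = signals * bytes_per_signal / cycle_s  # bytes/s
    print(f"SDN payload: {payload_rate / 1e6:.0f} MB/s "
          f"(~{payload_rate * 8 / 1e9:.2f} Gbit/s)")

    # AVN: 1024 x 1024 pixels at 30 Hz; assume 8 bits/pixel,
    # uncompressed, per camera.
    avn_rate = 1024 * 1024 * 30 * 1.0
    print(f"AVN stream:  {avn_rate / 1e6:.1f} MB/s per camera")
    ```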

  5. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of the datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact on microscopy applications. We present a high performance computing (HPC) solution to this problem. It involves decomposing the spatial 3D image into segments that are assigned to unique processors and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large-scale experiments with massive datasets.
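
    The decomposition idea can be sketched in miniature: split the volume into slabs, extend each by a halo equal to the filter radius (the data a node would receive from its nearest neighbours), and filter each slab independently. Assuming SciPy, the slab-by-slab result matches a single global filter call; the serial loop here stands in for the parallel nodes:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def median_filter_slabs(volume, size=3, nslabs=4):
        """Filter a 3D volume slab by slab along z, with a halo of
        size // 2 planes so results match a global filter call."""
        halo = size // 2
        out = np.empty_like(volume)
        bounds = np.linspace(0, volume.shape[0], nslabs + 1, dtype=int)
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            # Extend the slab by the halo, clipped at the volume edges.
            a, b = max(lo - halo, 0), min(hi + halo, volume.shape[0])
            filtered = median_filter(volume[a:b], size=size)
            out[lo:hi] = filtered[lo - a : lo - a + (hi - lo)]
        return out

    vol = np.random.default_rng(2).random((64, 64, 64)).astype(np.float32)
    assert np.allclose(median_filter_slabs(vol), median_filter(vol, size=3))
    ```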

  6. Academic self-efficacy for high school scale: search for psychometric evidence

    Directory of Open Access Journals (Sweden)

    Soely Polydoro

    2015-05-01

    This article presents the adaptation and the search for psychometric evidence of an academic self-efficacy scale. High school students (N = 453) participated in the research (mean age 15.93; SD 1.2). The Academic Self-efficacy Scale for High School is an adapted scale composed of 16 items organized into three factors: self-efficacy for learning, self-efficacy for acting in school life, and self-efficacy for career decisions. Exploratory factor analysis yielded a KMO of 0.90, with 56.57% of the variance explained. The internal consistency was 0.88. The scale demonstrated a good capacity to identify the academic self-efficacy of high school students.

  7. High performance nano-composite technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. Nano-composite materials in particular, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of the merits of nano-composites, studies have been confined to a few special materials at the laboratory scale, because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  8. Physics of integrated high-performance NSTX plasmas

    International Nuclear Information System (INIS)

    Menard, J. E.; Bell, M. G.; Bell, R. E.; Fredrickson, E. D.; Gates, D. A.; Heidbrink, W.; Kaita, R.; Kaye, S. M.; Kessel, C. E.; Kugel, H.; LeBlanc, B. P.; Lee, K. C.; Levinton, F. M.; Maingi, R.; Medley, S. S.; Mikkelsen, D. R.; Mueller, D.; Nishino, N.; Ono, M.; Park, H.; Park, W.; Paul, S. F.; Peebles, T.; Peng, M.; Raman, R.; Redi, M.; Roquemore, L.; Sabbagh, S. A.; Skiner, C. H.; Sontag, A.; Soukhanovskii, V.; Stratton, B.; Stutman, D.; Synakowski, E.; Takase, Y.; Taylor, G.; Tritz, K.; Wade, M.; Wilson, J. R.; Zhu, W.

    2005-01-01

    An overarching goal of magnetic fusion research is the integration of steady-state operation with high fusion power density, high plasma β, good thermal and fast-particle confinement, and manageable heat and particle fluxes to reactor internal components. NSTX has made significant progress in integrating and understanding the interplay between these competing elements. Sustained high elongation up to 2.5 and H-mode transitions during the I_p ramp-up have increased β_p and reduced l_i at high current, resulting in I_p flat-top durations exceeding 0.8 s for I_p > 0.8 MA. These shape and profile changes delay the onset of deleterious global MHD activity, yielding β_N values > 4.5 and β_T ∼ 20% maintained for several current diffusion times. Higher-β_N discharges operating above the no-wall limit are sustained via rotational stabilization of the RWM. H-mode confinement scaling factors relative to H98(y,2) span the range 1 ± 0.4 for B_T > 4 kG and show a strong (nearly linear) residual scaling with B_T. Power balance analysis indicates that electron thermal transport dominates the loss power in beam-heated H-mode discharges, but the core χ_e can be significantly reduced through current profile modification consistent with reversed magnetic shear. Small-ELM regimes have been obtained in high-performance plasmas on NSTX, but the ELM type and associated pedestal energy loss are found to depend sensitively on the boundary elongation, magnetic balance, and edge collisionality. NPA data and TRANSP analysis suggest that resonant interactions with mid-radius tearing modes may lead to large fast-ion transport. The associated fast-ion diffusion and/or loss likely impacts both the driven current and the power deposition profiles from NBI heating. Results from experiments to initiate the plasma without the ohmic solenoid and integrated scenario modelling with the TSC code will also be described. (Author)

  9. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyro kinetic Toroidal Code in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  11. Optimizing the Performance of Data Analytics Frameworks

    NARCIS (Netherlands)

    Ghit, B.I.

    2017-01-01

    Data analytics frameworks enable users to process large datasets while hiding the complexity of scaling out their computations on large clusters of thousands of machines. Such frameworks parallelize the computations, distribute the data, and tolerate server failures by deploying their own runtime

  12. Cooling and manipulation of nanoparticles in high vacuum

    Science.gov (United States)

    Millen, J.; Kuhn, S.; Patolsky, F.; Kosloff, A.; Arndt, M.

    2016-09-01

    Optomechanical systems, where the mechanical motion of objects is measured and controlled using light, have a huge range of applications, from the metre-scale mirrors of LIGO which detect gravitational waves, to micron-scale superconducting systems that can transduce quantum signals. A fascinating addition to this field are free or levitated optomechanical systems, where the oscillator is not physically tethered. We study a variety of nanoparticles which are launched through vacuum (10^-8 mbar) and interact with an optical cavity. The centre-of-mass motion of a nanoparticle can be cooled by the optical cavity field. It is predicted that the quantum ground state of motion can be reached, leaving the particle free to evolve after release from the light field, thus preparing nanoscale matter for quantum interference experiments.

  13. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computing power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collaborations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes, which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...

  14. Software Toolchain for Large-Scale RE-NFA Construction on FPGA

    Directory of Open Access Journals (Sweden)

    Yi-Hua E. Yang

    2009-01-01

    and O(n×m) memory by our software. A large number of RE-NFAs are placed onto a two-dimensional staged pipeline, allowing scalability to thousands of RE-NFAs with linear area increase and little clock rate penalty due to scaling. On a PC with a 2 GHz Athlon64 processor and 2 GB memory, our prototype software constructs the hundreds of RE-NFAs used by Snort in less than 10 seconds. We also designed a benchmark generator which can produce RE-NFAs with configurable pattern complexity parameters, including state count, state fan-in, and loop-back and feed-forward distances. Several regular expressions with various complexities are used to test the performance of our RE-NFA construction software.

  15. HIGH-PERFORMANCE COATING MATERIALS

    Energy Technology Data Exchange (ETDEWEB)

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits pose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 °C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel are commonly employed for dealing with these problems. However, a major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also that the corrosion-preventing passive oxide layers that develop on their outermost surfaces are susceptible to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and the oxide layers, causing the accumulation of multiple layers of scale, impairing the components' function and efficacy, and entailing a substantial amount of time to remove them. If inexpensive carbon steel components could be coated and lined with cost-effective materials that are stable at high hydrothermal temperatures and resist corrosion, oxidation, and fouling, this would improve the power plant's economics by considerably reducing capital investment and by decreasing the costs of operations and maintenance through optimized maintenance schedules.

  16. Effect of DS Concentration on the PRO Performance Using a 5-Inch Scale Cellulose Triacetate-Based Hollow Fiber Membrane Module

    Directory of Open Access Journals (Sweden)

    Masahiro Yasukawa

    2018-05-01

    In this study, the pressure-retarded osmosis (PRO) performance of a 5-inch scale cellulose triacetate (CTA)-based hollow fiber (HF) membrane module was evaluated under a wide range of operating conditions (0.0–6.0 MPa applied pressure, 0.5–2.0 L/min feed solution (FS) inlet flow rate, 1.0–6.0 L/min draw solution (DS) inlet flow rate, and 0.1–0.9 M DS concentration) by using a PRO/reverse osmosis (RO) hybrid system. The subsequent RO system for DS regeneration enabled evaluation of the steady-state module performance. In pilot-scale module operation, since DS dilution and FS up-concentration occurred and were not negligible, unlike in the lab-scale experiments, the PRO performance depended strongly on operating conditions such as the inlet flow rates of both streams and the DS concentration. To compare module performance across different configurations, we propose a converted parameter in which the difference in packing density between the spiral-wound (SW) and the HF module is fairly considered. With the HF configuration, because of its high packing density, the volumetric performance was higher than that of the SW module; that is, fewer modules would be required than SW modules in a full-scale PRO plant.

  17. Mechanical stability of bentonite buffer system for high level nuclear waste

    Energy Technology Data Exchange (ETDEWEB)

    Lempinen, A. [Helsinki Univ. of Technology, Espoo (Finland). Lab. of Theoretical and Applied Mechanics

    1998-05-01

    According to present plans, high level nuclear waste in Finland is going to be disposed of in bedrock at a depth of several hundred metres. The spent fuel containers will be placed in boreholes drilled in the floors of deposition tunnels with engineered clay buffer, which is made of bentonite blocks. The tunnels will be filled with a mixture of bentonite and crushed rock. For stability calculations a thermomechanical model for compressed bentonite is needed. In the study a thermomechanically consistent model for reversible processes for swelling clays is presented. Preliminary calculations were performed and they show that uncertainty in material parameter values causes significantly different results. Therefore, measurements that are consistent with the model are needed. 12 refs.

  18. Mechanical stability of bentonite buffer system for high level nuclear waste

    International Nuclear Information System (INIS)

    Lempinen, A.

    1998-05-01

    According to present plans, high level nuclear waste in Finland is going to be disposed of in bedrock at a depth of several hundred metres. The spent fuel containers will be placed in boreholes drilled in the floors of deposition tunnels with engineered clay buffer, which is made of bentonite blocks. The tunnels will be filled with a mixture of bentonite and crushed rock. For stability calculations a thermomechanical model for compressed bentonite is needed. In the study a thermomechanically consistent model for reversible processes for swelling clays is presented. Preliminary calculations were performed and they show that uncertainty in material parameter values causes significantly different results. Therefore, measurements that are consistent with the model are needed

  19. Introduction to the Special Issue: Across the horizon: scale effects in global change research.

    Science.gov (United States)

    Gornish, Elise S; Leuzinger, Sebastian

    2015-01-01

    As a result of the increasing speed and magnitude with which habitats worldwide are experiencing environmental change, making accurate predictions of the effects of global change on ecosystems and the organisms that inhabit them has become an important goal for ecologists. Experimental and modelling approaches aimed at understanding the linkages between factors of global change and biotic responses have become numerous and increasingly complex in order to adequately capture the multifarious dynamics associated with these relationships. However, constrained by resources, experiments are often conducted at small spatiotemporal scales (e.g. looking at a plot of a few square metres over a few years) and at low organizational levels (looking at organisms rather than ecosystems), in spite of both theoretical and experimental work suggesting that ecological dynamics can be dissimilar across scales. This phenomenon has been hypothesized to occur because the mechanisms that drive dynamics differ across scales. A good example is the effect of elevated CO2 on transpiration: while transpiration can be reduced at the leaf level, it can increase at the stand level because leaf area per unit ground area increases. The reported net effect is then highly dependent on the spatiotemporal scale. This special issue considers the biological relevancy inherent in the patterns associated with the magnitude and type of response to changing environmental conditions across scales. This collection of papers attempts to provide a comprehensive treatment of this phenomenon in order to help develop an understanding of the extent of, and mechanisms involved in, ecological responses to global change. Published by Oxford University Press on behalf of the Annals of Botany Company.

  20. Developing occupational chronologies for surface archaeological deposits from heat retainer hearths on Pine Point and Langwell stations, Far Western New South Wales, Australia

    International Nuclear Information System (INIS)

    Shiner, J.

    2003-01-01

    The archaeological record of arid Australia is dominated by deflated distributions of stone artefacts and heat retainer hearths covering many thousands of square metres. These deposits have often been overlooked by archaeologists in preference for stratified deposits, which are regarded as more appropriate for investigating temporal issues. In recent years this situation has slowly begun to change with the large-scale dating of heat retainer hearths from surface contexts. The work of Fanning and Holdaway (2001) and Holdaway et al. (2002) in Far Western New South Wales has demonstrated that, through the dating of large numbers of hearths, it is possible to develop occupational chronologies for surface deposits. At a wider landscape scale these chronologies reflect the timing and tempo of the occupation of different places. A major component of my doctoral fieldwork on Pine Point and Langwell stations, 50 km south of Broken Hill in Western New South Wales, aimed to establish occupational chronologies from hearths for surface archaeological distributions. This paper reports the radiocarbon results from this investigation. (author). 6 refs., 2 figs., 1 tab