WorldWideScience

Sample records for canada involving large-scale

  1. Sustainability of an energy conversion system in Canada involving large-scale integrated hydrogen production using solid fuels

    Directory of Open Access Journals (Sweden)

    Nirmal V. Gnanapragasam, Bale V. Reddy, Marc A. Rosen

    2011-01-01

    Full Text Available The sustainability of a large-scale hydrogen production system is assessed qualitatively. The system uses solid fuels and aims to increase the sustainability of the energy system in Canada through the use of alternative energy forms. The system involves significant technology integration, with various energy conversion processes (e.g., gasification, chemical looping combustion, anaerobic digestion, combustion power cycles-electrolysis and solar-thermal convertors) interconnected to increase the utilization of solid fuels as much as feasible in a sustainable manner within cost, environmental and other constraints. The qualitative analysis involves ten different indicators for each of the three dimensions of sustainability: ecology, sociology and technology, applied to each process in the system and assessed on a ten-point quality scale. The results indicate that biomasses have better sustainability than coals, while newer secondary processes are essential for primary conversion to be sustainable, especially when using coals. Also, new developments in CO2 use (for algae-to-oil and commercial applications) and storage will in time help improve sustainability.
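
    The abstract describes scoring ten indicators in each of three sustainability dimensions on a ten-point scale for every process in the system. A minimal sketch of how such qualitative scores could be aggregated is shown below; the dimension names follow the abstract, but the indicator names and values are hypothetical and not taken from the paper.

      # Minimal sketch of aggregating qualitative sustainability scores
      # (indicators and example values are hypothetical placeholders).
      from statistics import mean

      # ten-point scores (1 = worst, 10 = best) per indicator, per dimension,
      # for a single conversion process such as gasification
      scores = {
          "ecology":    {"co2_emissions": 4, "land_use": 6, "water_use": 5},
          "sociology":  {"job_creation": 7, "public_acceptance": 5},
          "technology": {"maturity": 6, "efficiency": 7, "integration_effort": 4},
      }

      def dimension_score(indicators):
          """Average the ten-point indicator scores within one dimension."""
          return mean(indicators.values())

      def process_score(scores):
          """Average the three dimension scores into one sustainability index."""
          return mean(dimension_score(ind) for ind in scores.values())

      for dim, ind in scores.items():
          print(f"{dim:>10}: {dimension_score(ind):.1f}")
      print(f"   overall: {process_score(scores):.1f}")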

  2. An Analysis of Large-Scale Writing Assessments in Canada (Grades 5-8)

    Science.gov (United States)

    Peterson, Shelley Stagg; McClay, Jill; Main, Kristin

    2011-01-01

    This paper reports on an analysis of large-scale assessments of Grades 5-8 students' writing across 10 provinces and 2 territories in Canada. Theory, classroom practice, and the contributions and constraints of large-scale writing assessment are brought together with a focus on Grades 5-8 writing in order to provide both a broad view of…

  3. Evidence for large-scale effects of competition: niche displacement in Canada lynx and bobcat.

    Science.gov (United States)

    Peers, Michael J L; Thornton, Daniel H; Murray, Dennis L

    2013-12-22

    Determining the patterns, causes and consequences of character displacement is central to our understanding of competition in ecological communities. However, the majority of competition research has occurred over small spatial extents or focused on fine-scale differences in morphology or behaviour. The effects of competition on broad-scale distribution and niche characteristics of species remain poorly understood but critically important. Using range-wide species distribution models, we evaluated whether Canada lynx (Lynx canadensis) or bobcat (Lynx rufus) were displaced in regions of sympatry. Consistent with our prediction, we found that lynx niches were less similar to those of bobcat in areas of sympatry versus allopatry, with a stronger reliance on snow cover driving lynx niche divergence in the sympatric zone. By contrast, bobcat increased niche breadth in zones of sympatry, and bobcat niches were equally similar to those of lynx in zones of sympatry and allopatry. These findings suggest that competitively disadvantaged species avoid competition at large scales by restricting their niche to highly suitable conditions, while superior competitors expand the diversity of environments used. Our results indicate that competition can manifest within climatic niche space across species' ranges, highlighting the importance of biotic interactions occurring at large spatial scales on niche dynamics.
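
    The niche comparisons described above rest on quantifying how similar two species' environmental suitability surfaces are in sympatry versus allopatry. A common metric for this is Schoener's D computed over gridded habitat suitability values; the sketch below is a generic illustration of that metric, not the authors' code, and the suitability grids are random placeholders.

      import numpy as np

      def schoeners_d(suit_a, suit_b):
          """Schoener's D niche overlap between two suitability grids.

          Each grid is normalised to sum to 1, then
          D = 1 - 0.5 * sum(|p_a - p_b|); D = 1 means identical niches, D = 0 none.
          """
          p_a = suit_a / suit_a.sum()
          p_b = suit_b / suit_b.sum()
          return 1.0 - 0.5 * np.abs(p_a - p_b).sum()

      # hypothetical suitability surfaces for lynx and bobcat over the same grid
      rng = np.random.default_rng(0)
      lynx = rng.random((50, 50))
      bobcat = rng.random((50, 50))
      print(f"Niche overlap (Schoener's D): {schoeners_d(lynx, bobcat):.3f}")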

  4. Non-stationary analysis of the frequency and intensity of heavy precipitation over Canada and their relations to large-scale climate patterns

    Science.gov (United States)

    Tan, Xuezhi; Gan, Thian Yew

    2016-06-01

    In recent years the frequency and severity of floods have increased across Canada, making it important to understand the characteristics of Canadian heavy precipitation. Long-term precipitation data from 463 gauging stations across Canada were analyzed using the non-stationary generalized extreme value (GEV), Poisson and generalized Pareto (GP) distributions. Time-varying covariates representing large-scale climate patterns such as the El Niño Southern Oscillation (ENSO), North Atlantic Oscillation (NAO), Pacific Decadal Oscillation (PDO) and North Pacific Oscillation (NP) were incorporated into the parameters of the GEV, Poisson and GP distributions. Results show that GEV distributions tend to underestimate annual maximum daily precipitation (AMP) of western and eastern coastal regions of Canada, compared to GP distributions. Poisson regressions show that temporal clusters of heavy precipitation events in Canada are related to large-scale climate patterns. By modeling AMP time series with non-stationary GEV and heavy precipitation with non-stationary GP distributions, it is evident that AMP and heavy precipitation in Canada show strong non-stationarities (abrupt and slowly varying changes), likely because of the influence of large-scale climate patterns. AMP in southwestern coastal regions, the southern Canadian Prairies and the Great Lakes tends to be higher in El Niño than in La Niña years, while AMP in other regions of Canada tends to be lower in El Niño than in La Niña years. The influence of ENSO on heavy precipitation was spatially consistent with, but stronger than, its influence on AMP. The effects of PDO, NAO and NP on extreme precipitation are also statistically significant at some stations across Canada.
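
    Incorporating a climate index such as ENSO into the distribution parameters typically means letting, for example, the GEV location parameter vary linearly with the index and fitting by maximum likelihood. The sketch below illustrates that idea on synthetic data; it is not the authors' code, and the covariate model (location linear in ENSO, constant scale and shape) is an assumption made for illustration.

      import numpy as np
      from scipy.stats import genextreme
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      years = 60
      enso = rng.normal(size=years)                      # hypothetical ENSO index
      # synthetic annual-maximum precipitation whose location shifts with ENSO
      amp = genextreme.rvs(c=-0.1, loc=30 + 3 * enso, scale=8, random_state=rng)

      def neg_log_lik(theta):
          """Negative log-likelihood of a GEV whose location is mu0 + mu1*ENSO."""
          mu0, mu1, log_scale, shape = theta
          loc = mu0 + mu1 * enso
          return -genextreme.logpdf(amp, c=shape, loc=loc,
                                    scale=np.exp(log_scale)).sum()

      fit = minimize(neg_log_lik,
                     x0=[np.mean(amp), 0.0, np.log(np.std(amp)), -0.1],
                     method="Nelder-Mead")
      mu0, mu1, log_scale, shape = fit.x
      print(f"location = {mu0:.1f} + {mu1:.1f} * ENSO, "
            f"scale = {np.exp(log_scale):.1f}, shape (scipy c) = {shape:.2f}")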

  5. Large-Scale Variations in Lumber Value Recovery of Yellow Birch and Sugar Maple in Quebec, Canada.

    Directory of Open Access Journals (Sweden)

    Mariana Hassegawa

    Full Text Available Silvicultural restoration measures have been implemented in the northern hardwoods forests of southern Quebec, Canada, but their financial applicability is often hampered by the depleted state of the resource. To help identify sites most suited for the production of high quality timber, where the potential return on silvicultural investments should be the highest, this study assessed the impact of stand and site characteristics on timber quality in sugar maple (Acer saccharum Marsh.) and yellow birch (Betula alleghaniensis Britt.). For this purpose, lumber value recovery (LVR), an estimate of the summed value of boards contained in a unit volume of round wood, was used as an indicator of timber quality. Predictions of LVR were made for yellow birch and sugar maple trees contained in a network of more than 22000 temporary sample plots across the Province. Next, stand-level variables were selected and models to predict LVR were built using the boosted regression trees method. Finally, the occurrence of spatial clusters was verified by a hotspot analysis. Results showed that in both species LVR was positively correlated with the stand age and structural diversity index, and negatively correlated with the number of merchantable stems. Yellow birch had higher LVR in areas with shallower soils, whereas sugar maple had higher LVR in regions with deeper soils. The hotspot analysis indicated that clusters of high and low LVR exist across the province for both species. Although it remains uncertain to what extent the variability of LVR may result from variations in past management practices or in inherent site quality, we argue that efforts to produce high quality timber should be prioritized in sites where LVR is predicted to be the highest.
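
    Predicting LVR from stand-level variables with boosted regression trees can be illustrated with a generic gradient-boosting fit. The snippet below uses scikit-learn on made-up predictors (stand age, structural diversity, stem count, soil depth) purely to show the method named in the abstract; it is not the study's model or data.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n = 2000
      # hypothetical stand-level predictors
      X = np.column_stack([
          rng.uniform(40, 200, n),     # stand age (years)
          rng.uniform(0.1, 1.0, n),    # structural diversity index
          rng.integers(100, 900, n),   # merchantable stems per ha
          rng.uniform(10, 120, n),     # soil depth (cm)
      ])
      # synthetic LVR: rises with age and diversity, falls with stem count
      y = 0.4 * X[:, 0] + 60 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 10, n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                      max_depth=3).fit(X_tr, y_tr)
      print("R^2 on held-out plots:", round(brt.score(X_te, y_te), 3))
      print("relative influence of predictors:", brt.feature_importances_.round(2))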

  6. Large-scale analysis of conserved rare codon clusters suggests an involvement in co-translational molecular recognition events

    Science.gov (United States)

    Chartier, Matthieu; Gaudreault, Francis; Najmanovich, Rafael

    2012-01-01

    Motivation: An increasing amount of evidence from experimental and computational analysis suggests that rare codon clusters are functionally important for protein activity. Most of the studies on rare codon clusters were performed on a limited number of proteins or protein families. In the present study, we present the Sherlocc program and how it can be used for large scale protein family analysis of evolutionarily conserved rare codon clusters and their relation to protein function and structure. This large-scale analysis was performed using the whole Pfam database covering over 70% of the known protein sequence universe. Our program Sherlocc, detects statistically relevant conserved rare codon clusters and produces a user-friendly HTML output. Results: Statistically significant rare codon clusters were detected in a multitude of Pfam protein families. The most statistically significant rare codon clusters were predominantly identified in N-terminal Pfam families. Many of the longest rare codon clusters are found in membrane-related proteins which are required to interact with other proteins as part of their function, for example in targeting or insertion. We identified some cases where rare codon clusters can play a regulating role in the folding of catalytically important domains. Our results support the existence of a widespread functional role for rare codon clusters across species. Finally, we developed an online filter-based search interface that provides access to Sherlocc results for all Pfam families. Availability: The Sherlocc program and search interface are open access and are available at http://bcb.med.usherbrooke.ca Contact: rafael.najmanovich@usherbrooke.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22467916
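
    The core of the analysis is detecting clusters of rare codons along a coding sequence. A toy version of such a detector, using a sliding window over codon usage frequencies, is sketched below; the usage table, window length and threshold are illustrative stand-ins, not Sherlocc's actual statistics.

      # Toy rare-codon-cluster detector: flag windows whose mean codon usage
      # frequency falls below a threshold (values are illustrative only).
      CODON_USAGE = {        # hypothetical per-thousand usage frequencies
          "CTG": 39.6, "CTA": 7.2, "CGA": 6.3, "AGG": 12.0,
          "GAA": 29.0, "GGT": 10.8, "CCC": 19.8, "ATA": 7.5,
      }

      def rare_codon_windows(cds, window=5, threshold=10.0):
          """Return start indices (in codons) of windows enriched in rare codons."""
          codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
          hits = []
          for start in range(len(codons) - window + 1):
              freqs = [CODON_USAGE.get(c, 20.0) for c in codons[start:start + window]]
              if sum(freqs) / window < threshold:
                  hits.append(start)
          return hits

      cds = "CTGGAACTACGAATAAGGCTACGAATACCC"   # made-up coding sequence
      print("rare-codon windows start at codon:", rare_codon_windows(cds))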

  7. Transcriptomic and proteomic responses of Serratia marcescens to spaceflight conditions involve large-scale changes in metabolic pathways

    Science.gov (United States)

    Wang, Yajuan; Yuan, Yanting; Liu, Jinwen; Su, Longxiang; Chang, De; Guo, Yinghua; Chen, Zhenhong; Fang, Xiangqun; Wang, Junfeng; Li, Tianzhi; Zhou, Lisha; Fang, Chengxiang; Yang, Ruifu; Liu, Changting

    2014-04-01

    The microgravity environment of spaceflight expeditions has been associated with altered microbial responses. This study explores the characterization of Serratia marcescens grown in a spaceflight environment at the phenotypic, transcriptomic and proteomic levels. From November 1, 2011 to November 17, 2011, a strain of S. marcescens was sent into space for 398 h on the Shenzhou VIII spacecraft, and ground simulation was performed as a control (LCT-SM213). After the flight, two mutant strains (LCT-SM166 and LCT-SM262) were selected for further analysis. Although no changes in the morphology, post-culture growth kinetics, hemolysis or antibiotic sensitivity were observed, the two mutant strains exhibited significant changes in their metabolic profiles after exposure to spaceflight. Enrichment analysis of the transcriptome showed that the differentially expressed genes of the two spaceflight strains and the ground control strain mainly included those involved in metabolism and degradation. The proteome revealed that changes at the protein level were also associated with metabolic functions, such as glycolysis/gluconeogenesis, pyruvate metabolism, arginine and proline metabolism and the degradation of valine, leucine and isoleucine. In summary, S. marcescens showed alterations primarily in genes and proteins that were associated with metabolism under spaceflight conditions, which gave us valuable clues for future research.

  8. Evaluating HapMap SNP data transferability in a large-scale genotyping project involving 175 cancer-associated genes.

    Science.gov (United States)

    Ribas, Gloria; González-Neira, Anna; Salas, Antonio; Milne, Roger L; Vega, Ana; Carracedo, Begoña; González, Emilio; Barroso, Eva; Fernández, Lara P; Yankilevich, Patricio; Robledo, Mercedes; Carracedo, Angel; Benítez, Javier

    2006-02-01

    One of the many potential uses of the HapMap project is its application to the investigation of complex disease aetiology among a wide range of populations. This study aims to assess the transferability of HapMap SNP data to the Spanish population in the context of cancer research. We have carried out a genotyping study in Spanish subjects involving 175 candidate cancer genes using an indirect gene-based approach and compared results with those for HapMap CEU subjects. Allele frequencies were very consistent between the two samples, with a high positive correlation (R) of 0.91. The transferability of tagSNPs selected from HapMap CEU data using pairwise r² thresholds of 0.8 and 0.5 was assessed by applying these to the Spanish and current HapMap data for 66 genes. In general, the HapMap tagSNPs performed very well. Our results show generally high concordance with HapMap data in allele frequencies and haplotype distributions and confirm the applicability of HapMap SNP data to the study of complex diseases among the Spanish population.
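
    Assessing tagSNP transferability rests on pairwise linkage disequilibrium, usually summarised by r² between SNP genotype vectors, with tags chosen so that every SNP is captured at a chosen r² threshold. A minimal, generic illustration of computing pairwise r² and greedily choosing tags is given below; the genotype matrix is a random placeholder, and this is not the study's pipeline.

      import numpy as np

      def pairwise_r2(genotypes):
          """r^2 between SNP columns coded 0/1/2 (squared Pearson correlation)."""
          corr = np.corrcoef(genotypes, rowvar=False)
          return corr ** 2

      def greedy_tagsnps(r2, threshold=0.8):
          """Pick tags so every SNP has r^2 >= threshold with at least one tag."""
          n = r2.shape[0]
          untagged, tags = set(range(n)), []
          while untagged:
              # choose the SNP covering the most currently untagged SNPs
              best = max(untagged, key=lambda i: sum(r2[i, j] >= threshold
                                                     for j in untagged))
              tags.append(best)
              untagged -= {j for j in untagged if r2[best, j] >= threshold}
          return tags

      rng = np.random.default_rng(3)
      geno = rng.integers(0, 3, size=(120, 20))      # 120 subjects x 20 SNPs
      r2 = pairwise_r2(geno)
      print("tagSNPs at r^2 >= 0.8:", greedy_tagsnps(r2, 0.8))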

  9. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    World-famous architects today challenge the exposure of concrete in their architecture. It is my hope to be able to complement these. I try to develop new aesthetic potentials for concrete and ceramics, at large scales that have not been seen before in the ceramic area. It is expected to result...

  10. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U. S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT & T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  11. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors......). Simulation programs are proposed as a control supporting tool for daily operation and performance prediction of central solar heating plants. Finally the CSHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given....

  12. Large scale tracking algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
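
    One standard building block of the trackers discussed here is frame-to-frame data association, where detections are assigned to existing tracks by minimising total distance. A small, generic sketch using optimal assignment (Hungarian algorithm) with a distance gate is shown below; it illustrates the general technique, not the specific algorithms evaluated in the report, and the positions are made-up values.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def associate(tracks, detections, gate=5.0):
          """Assign detections to tracks by minimum total Euclidean distance.

          Pairs farther apart than `gate` are rejected; unmatched detections
          would normally seed new tracks.
          """
          cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
          rows, cols = linear_sum_assignment(cost)
          matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
          unmatched = set(range(len(detections))) - {c for _, c in matches}
          return matches, sorted(unmatched)

      tracks = np.array([[0.0, 0.0], [10.0, 10.0]])          # predicted positions
      detections = np.array([[0.5, -0.2], [30.0, 30.0], [9.0, 11.0]])
      print(associate(tracks, detections))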

  13. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  14. An efficient large-scale retroviral transduction method involving preloading the vector into a RetroNectin-coated bag with low-temperature shaking.

    Science.gov (United States)

    Dodo, Katsuyuki; Chono, Hideto; Saito, Naoki; Tanaka, Yoshinori; Tahara, Kenichi; Nukaya, Ikuei; Mineno, Junichi

    2014-01-01

    In retroviral vector-mediated gene transfer, transduction efficiency can be hampered by inhibitory molecules derived from the culture fluid of virus producer cell lines. To remove these inhibitory molecules to enable better gene transduction, we had previously developed a transduction method using a fibronectin fragment-coated vessel (i.e., the RetroNectin-bound virus transduction method). In the present study, we developed a method that combined RetroNectin-bound virus transduction with low-temperature shaking and applied this method in manufacturing autologous retroviral-engineered T cells for adoptive transfer gene therapy in a large-scale closed system. Retroviral vector was preloaded into a RetroNectin-coated bag and incubated at 4°C for 16 h on a reciprocating shaker at 50 rounds per minute. After the supernatant was removed, activated T cells were added to the bag. The bag transduction method has the advantage of increasing transduction efficiency, as simply flipping over the bag during gene transduction facilitates more efficient utilization of the retroviral vector adsorbed on the top and bottom surfaces of the bag. Finally, we performed validation runs of endoribonuclease MazF-modified CD4+ T cell manufacturing for HIV-1 gene therapy and T cell receptor-modified T cell manufacturing for MAGE-A4 antigen-expressing cancer gene therapy and achieved over 200-fold (≥ 10¹⁰) and 100-fold (≥ 5 × 10⁹) expansion, respectively. In conclusion, we demonstrated that the large-scale closed transduction system is highly efficient for retroviral vector-based T cell manufacturing for adoptive transfer gene therapy, and this technology is expected to be amenable to automation and improve current clinical gene therapy protocols.

  15. An efficient large-scale retroviral transduction method involving preloading the vector into a RetroNectin-coated bag with low-temperature shaking.

    Directory of Open Access Journals (Sweden)

    Katsuyuki Dodo

    Full Text Available In retroviral vector-mediated gene transfer, transduction efficiency can be hampered by inhibitory molecules derived from the culture fluid of virus producer cell lines. To remove these inhibitory molecules to enable better gene transduction, we had previously developed a transduction method using a fibronectin fragment-coated vessel (i.e., the RetroNectin-bound virus transduction method). In the present study, we developed a method that combined RetroNectin-bound virus transduction with low-temperature shaking and applied this method in manufacturing autologous retroviral-engineered T cells for adoptive transfer gene therapy in a large-scale closed system. Retroviral vector was preloaded into a RetroNectin-coated bag and incubated at 4°C for 16 h on a reciprocating shaker at 50 rounds per minute. After the supernatant was removed, activated T cells were added to the bag. The bag transduction method has the advantage of increasing transduction efficiency, as simply flipping over the bag during gene transduction facilitates more efficient utilization of the retroviral vector adsorbed on the top and bottom surfaces of the bag. Finally, we performed validation runs of endoribonuclease MazF-modified CD4+ T cell manufacturing for HIV-1 gene therapy and T cell receptor-modified T cell manufacturing for MAGE-A4 antigen-expressing cancer gene therapy and achieved over 200-fold (≥ 10¹⁰) and 100-fold (≥ 5 × 10⁹) expansion, respectively. In conclusion, we demonstrated that the large-scale closed transduction system is highly efficient for retroviral vector-based T cell manufacturing for adoptive transfer gene therapy, and this technology is expected to be amenable to automation and improve current clinical gene therapy protocols.

  16. Large-scale RNAi screen of G protein-coupled receptors involved in larval growth, molting and metamorphosis in the red flour beetle

    Directory of Open Access Journals (Sweden)

    Shah Kapil

    2011-08-01

    Full Text Available Abstract Background The G protein-coupled receptors (GPCRs) belong to the largest superfamily of integral cell membrane proteins and play crucial roles in physiological processes including behavior, development and reproduction. Because of their broad and diverse roles in cellular signaling, GPCRs are the therapeutic targets for many prescription drugs. However, there is no commercial pesticide targeting insect GPCRs. In this study, we employed functional genomics methods and used the red flour beetle, Tribolium castaneum, as a model system to study the physiological roles of GPCRs during the larval growth, molting and metamorphosis. Results A total of 111 non-sensory GPCRs were identified in the T. castaneum genome. Thirty-nine of them were not reported previously. A large-scale RNA interference (RNAi) screen was used to study the function of all these GPCRs during immature stages. Double-stranded RNA (dsRNA)-mediated knockdown in the expression of genes coding for eight GPCRs caused severe developmental arrest and ecdysis failure (with more than 90% mortality after dsRNA injection). These GPCRs include dopamine-2 like receptor (TC007490/D2R) and latrophilin receptor (TC001872/Cirl). The majority of larvae injected with TC007490/D2R dsRNA died during the larval stage prior to entering the pupal stage, suggesting that this GPCR is essential for larval growth and development. Conclusions The results from our study revealed the physiological roles of some GPCRs in T. castaneum. These findings could help in the development of novel pesticides targeting these GPCRs.

  17. Cronkhite-Canada Syndrome: Gastric Involvement Diagnosed by MDCT

    Directory of Open Access Journals (Sweden)

    Jonathan D. Samet

    2009-01-01

    Full Text Available Cronkhite-Canada is a rare nonfamilial polyposis syndrome that usually presents as chronic malabsorption in adults. We present a case of a 73-year-old woman with chronic gastrointestinal bleeding and malnutrition. On CT imaging she was found to have massive gastric polyps, which on biopsy were most consistent with Cronkhite-Canada syndrome.

  18. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  19. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissem

  20. Very Large Scale Integration (VLSI).

    Science.gov (United States)

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  1. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  2. A consumer involvement model for health technology assessment in Canada.

    Science.gov (United States)

    Pivik, Jayne; Rode, Elisabeth; Ward, Christopher

    2004-08-01

    Similar to other health policy initiatives, there is a growing movement to involve consumers in decisions affecting their treatment options. Access to treatments can be impacted by decisions made during a health technology assessment (HTA), i.e., the rigorous assessment of medical interventions such as drugs, vaccines, devices, materials, medical and surgical procedures and systems. The purpose of this paper was to empirically assess the interest and potential mechanisms for consumer involvement in HTA by identifying what health consumer organizations consider meaningful involvement, examining current practices internationally and developing a model for involvement based on identified priorities and needs. Canadian health consumer groups representing the largest disease or illness conditions reported a desire for involvement in HTA and provided feedback on mechanisms for facilitating their involvement.

  3. Strings and large scale magnetohydrodynamics

    CERN Document Server

    Olesen, P

    1995-01-01

    From computer simulations of magnetohydrodynamics one knows that a turbulent plasma becomes very intermittent, with the magnetic fields concentrated in thin flux tubes. This situation looks very "string-like", so we investigate whether strings could be solutions of the magnetohydrodynamics equations in the limit of infinite conductivity. We find that the induction equation is satisfied, and we discuss the Navier-Stokes equation (without viscosity) with the Lorentz force included. We argue that the string equations (with non-universal maximum velocity) should describe the large scale motion of narrow magnetic flux tubes, because of a large reparametrization (gauge) invariance of the magnetic and electric string fields.

  4. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).

  5. Testing gravity on Large Scales

    Directory of Open Access Journals (Sweden)

    Raccanelli Alvise

    2013-09-01

    Full Text Available We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  6. Large-Scale Galaxy Bias

    CERN Document Server

    Desjacques, Vincent; Schmidt, Fabian

    2016-01-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a pedagogical proof of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which includes the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in i...
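
    The "perturbative bias expansion" referred to above writes the galaxy overdensity as a series in the local gravitational observables. Schematically, at second order it takes a form like the following (a generic textbook form quoted only to illustrate what the bias parameters multiply, not an equation reproduced from the review):

      \delta_g(\mathbf{x},\tau) \simeq b_1\,\delta(\mathbf{x},\tau)
        + \frac{b_2}{2}\,\delta^2(\mathbf{x},\tau)
        + b_{K^2}\,K_{ij}K^{ij}(\mathbf{x},\tau) + \dots ,
      \qquad K_{ij} \equiv \Big(\partial_i\partial_j\nabla^{-2} - \tfrac{1}{3}\delta_{ij}\Big)\,\delta ,

    where delta is the matter overdensity, K_ij the tidal field, and b_1, b_2, b_{K^2} are bias parameters of the kind described in the abstract.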

  7. Large Scale Magnetostrictive Valve Actuator

    Science.gov (United States)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper will discuss the potential applications of the technology; give an overview of the as-built actuator design; discuss problems that were uncovered during the development testing; review test data and evaluate weaknesses of the design; and discuss areas for improvement for future work. This actuator holds promise as a low power, high load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.

  8. Large scale cluster computing workshop

    Energy Technology Data Exchange (ETDEWEB)

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters of scale 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interest within HENP and the larger clustering community.

  9. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  10. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent from shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinklers were built in Wielkopolska. At the end of the 1970s, 67 sprinklers with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler reached 95 ha. In 1989 there were 98 sprinklers, and the area equipped with them was more than 10 130 ha. The study was conducted on 7 large sprinklers with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 90's and ownership changes in agriculture, large-scale sprinklers have undergone significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure, demand structure and an increase in operating costs. There has also been a threefold increase in electricity prices. In practice, operation of large-scale irrigation encountered all kinds of barriers and limitations: shortcomings of system solutions, supply difficulties and high levels of equipment failure, which did not encourage rational use of the available sprinklers. A site inspection was carried out to document the current status of the remaining irrigation infrastructure. The adopted scheme for the restructuring of Polish agriculture was not the best solution, causing massive destruction of assets previously invested in the sprinkler system.

  11. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  12. Conundrum of the Large Scale Streaming

    CERN Document Server

    Malm, T M

    1999-01-01

    The etiology of the large-scale peculiar velocity (large-scale streaming motion) of clusters seems increasingly tenuous within the context of the gravitational instability hypothesis. Are there any alternative testable models that could account for such large-scale streaming of clusters?

  13. Curvature constraints from Large Scale Structure

    CERN Document Server

    Di Dio, Enea; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-01-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter $\\Omega_K$ with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on the spatial curvature parameter estimation. We show that constraints on the curvature para...
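
    A Fisher matrix forecast of the kind mentioned here combines the derivatives of an observable (e.g., angular power spectra) with respect to the cosmological parameters and the data covariance. A minimal generic sketch, with made-up derivatives and covariance rather than the paper's survey specification, is shown below.

      import numpy as np

      def fisher_matrix(derivs, cov):
          """F_ab = (d mu / d p_a)^T C^-1 (d mu / d p_b) for a Gaussian
          likelihood with parameter-independent covariance."""
          cinv = np.linalg.inv(cov)
          return derivs @ cinv @ derivs.T

      # hypothetical: 2 parameters (Omega_K, b_1), 50 power-spectrum bins
      rng = np.random.default_rng(4)
      derivs = rng.normal(size=(2, 50))          # d C_l / d parameter (placeholder)
      cov = np.diag(rng.uniform(0.5, 2.0, 50))   # diagonal data covariance

      F = fisher_matrix(derivs, cov)
      errors = np.sqrt(np.diag(np.linalg.inv(F)))    # marginalised 1-sigma forecasts
      print("forecast sigma(Omega_K), sigma(b_1):", errors.round(3))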

  14. Introducing Large-Scale Innovation in Schools

    Science.gov (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  15. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  16. Vector dissipativity theory for large-scale impulsive dynamical systems

    Directory of Open Access Journals (Sweden)

    Haddad Wassim M.

    2004-01-01

    Full Text Available Modern complex large-scale impulsive systems involve multiple modes of operation placing stringent demands on controller analysis of increasing complexity. In analyzing these large-scale systems, it is often desirable to treat the overall impulsive system as a collection of interconnected impulsive subsystems. Solution properties of the large-scale impulsive system are then deduced from the solution properties of the individual impulsive subsystems and the nature of the impulsive system interconnections. In this paper, we develop vector dissipativity theory for large-scale impulsive dynamical systems. Specifically, using vector storage functions and vector hybrid supply rates, dissipativity properties of the composite large-scale impulsive systems are shown to be determined from the dissipativity properties of the impulsive subsystems and their interconnections. Furthermore, extended Kalman-Yakubovich-Popov conditions, in terms of the impulsive subsystem dynamics and interconnection constraints, characterizing vector dissipativeness via vector system storage functions, are derived. Finally, these results are used to develop feedback interconnection stability results for large-scale impulsive dynamical systems using vector Lyapunov functions.

  17. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  18. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  19. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    Science.gov (United States)

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  20. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  1. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    A new routing scheme, Topological Routing, for large-scale networks is proposed. It allows for efficient routing without large routing tables as known from traditional routing schemes. It presupposes a certain level of order in the networks, known from Structural QoS. The main issues in applying...... Topological Routing to large-scale networks are discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed and perspectives on the prerequisites for practical deployment of Topological Routing...

  2. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    A new routing scheme, Topological Routing, for large-scale networks is proposed. It allows for efficient routing without large routing tables as known from traditional routing schemes. It presupposes a certain level of order in the networks, known from Structural QoS. The main issues in applying...... Topological Routing to large-scale networks are discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed and perspectives on the prerequisites for practical deployment of Topological Routing...

  3. Large-scale Complex IT Systems

    CERN Document Server

    Sommerville, Ian; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challenges and issues in the development of large-scale complex, software-intensive systems. Central to this is the notion that we cannot separate software from the socio-technical environment in which it is used.

  4. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...
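
    The width/height analysis described above amounts to reading the pixel dimensions of a large set of images and summarising the distribution of their proportions. A minimal sketch with the Pillow library follows; the folder path and binning are placeholders, not the study's actual setup or databases.

      # Minimal sketch: histogram of width/height proportions over an image folder.
      from collections import Counter
      from pathlib import Path
      from PIL import Image

      def proportion_histogram(folder, bin_width=0.1):
          counts = Counter()
          for path in Path(folder).glob("*.jpg"):
              with Image.open(path) as img:
                  ratio = img.width / img.height
              counts[round(ratio / bin_width) * bin_width] += 1
          return counts

      # hypothetical folder of scanned paintings/photos
      for ratio, n in sorted(proportion_histogram("paintings/").items()):
          print(f"{ratio:4.1f} : {n}")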

  5. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of docume...... topics at par with a much larger case specific vocabulary.
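
    As a generic illustration of the kind of topic model the abstract refers to, the snippet below fits a small latent Dirichlet allocation model with scikit-learn; the toy corpus, vocabulary and topic count are placeholders and have nothing to do with the paper's large-scale setting.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      docs = [
          "solar heating plant district energy storage",
          "galaxy clustering redshift survey bias",
          "retroviral vector t cell gene therapy",
          "district heating solar collector energy",
          "galaxy survey large scale structure bias",
          "gene transfer vector transduction therapy",
      ]

      vec = CountVectorizer()
      X = vec.fit_transform(docs)                       # document-term counts
      lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

      terms = vec.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = [terms[i] for i in topic.argsort()[::-1][:4]]
          print(f"topic {k}: {', '.join(top)}")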

  6. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    S F King

    2004-02-01

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale.

  7. Ensemble methods for large scale inverse problems

    NARCIS (Netherlands)

    Heemink, A.W.; Umer Altaf, M.; Barbu, A.L.; Verlaan, M.

    2013-01-01

    Variational data assimilation, also sometimes simply called the ‘adjoint method’, is used very often for large scale model calibration problems. Using the available data, the uncertain parameters in the model are identified by minimizing a certain cost function that measures the difference between t
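
    The cost function mentioned above typically penalises both the misfit to observations and the departure from a background estimate of the uncertain parameters. A small generic sketch of minimising such a cost with scipy is given below; the toy linear forward model and the covariances are assumptions made purely for illustration, not the methods compared in the paper.

      import numpy as np
      from scipy.optimize import minimize

      # toy forward model: observations depend linearly on two uncertain parameters
      H = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
      p_true = np.array([2.0, -1.0])
      obs = H @ p_true + np.random.default_rng(5).normal(0, 0.05, 3)

      p_bg = np.zeros(2)                 # background (prior) parameter estimate
      B_inv = np.eye(2) * 0.1            # inverse background-error covariance
      R_inv = np.eye(3) * 400.0          # inverse observation-error covariance

      def cost(p):
          """J(p) = (p - p_b)^T B^-1 (p - p_b) + (y - H p)^T R^-1 (y - H p)"""
          db, dy = p - p_bg, obs - H @ p
          return db @ B_inv @ db + dy @ R_inv @ dy

      p_hat = minimize(cost, p_bg).x
      print("estimated parameters:", p_hat.round(2))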

  8. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  9. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that cont

  10. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
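
    For the framed structures considered here, a typical sensitivity derivative is the change of a static displacement with respect to a design variable, obtainable either by finite differences or analytically from the derivative of the stiffness matrix. The small example below compares the two on a two-spring system; it is a generic illustration under stated assumptions (load independent of the design variable), not the procedure developed in the paper.

      import numpy as np

      def stiffness(k1, k2):
          """Stiffness matrix of two springs in series, fixed at the left end."""
          return np.array([[k1 + k2, -k2],
                           [-k2,      k2]])

      f = np.array([0.0, 1.0])           # unit load at the free end
      k1, k2 = 100.0, 50.0

      # analytic sensitivity: du/dk1 = -K^-1 (dK/dk1) u  (load independent of k1)
      K = stiffness(k1, k2)
      u = np.linalg.solve(K, f)
      dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])
      du_analytic = np.linalg.solve(K, -dK_dk1 @ u)

      # finite-difference check
      h = 1e-4
      du_fd = (np.linalg.solve(stiffness(k1 + h, k2), f) - u) / h

      print("analytic    du/dk1:", du_analytic)
      print("finite-diff du/dk1:", du_fd)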

  11. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  12. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  13. Large-scale structure of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Ya.B. (Inst. Prikladnoj Matematiki, Moscow, USSR)

    1983-01-01

    A review of the theory of the large-scale structure of the Universe is given, including the formation of clusters and superclusters of galaxies as well as large voids. Particular attention is paid to the theory of the neutrino-dominated Universe, the cosmological model in which neutrinos with a rest mass of several tens of eV dominate the mean density. The evolution of small perturbations is discussed, and estimates of microwave background radiation fluctuations are given for different angular scales. The adiabatic theory of structure formation, known as the 'pancake' scenario, in which pancakes form first and subsequently fragment, is presented; this scenario is based on an approximate nonlinear theory of gravitational instability. Results of numerical experiments modeling the processes of large-scale structure formation are discussed.
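
    The 'pancake' picture rests on the Zel'dovich approximation, in which matter elements move ballistically along their initial displacement field until trajectories cross and caustics (pancakes) form. Below is a minimal one-dimensional sketch of that approximation; the single-mode displacement field and the growth-factor values are illustrative assumptions, not quantities taken from the review.

```python
import numpy as np

# Zel'dovich approximation in 1D: x(q, t) = q + D(t) * s(q),
# where q is the Lagrangian coordinate, D the linear growth factor,
# and s(q) the initial displacement field.
q = np.linspace(0.0, 2.0 * np.pi, 2001)          # Lagrangian coordinates
s = -0.8 * np.sin(q)                             # assumed single-mode displacement

for D in (0.5, 1.0, 1.25):                       # illustrative growth factors
    x = q + D * s                                # Eulerian positions
    # 1D density from mass conservation: rho / rho_mean = 1 / |dx/dq|
    dxdq = np.gradient(x, q)
    rho = 1.0 / np.abs(dxdq)
    print(f"D = {D:4.2f}: max density contrast = {rho.max():10.2f}")
    # Once D * max|ds/dq| reaches 1, dx/dq -> 0 somewhere: trajectories
    # cross and a "pancake" (a caustic of formally infinite density) forms.
```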

  14. Large-scale structure of the universe

    Energy Technology Data Exchange (ETDEWEB)

    Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Y.B.

    1983-01-01

    A survey is given of theories for the origin of large-scale structure in the universe: clusters and superclusters of galaxies, and vast black regions practically devoid of galaxies. Special attention is paid to the theory of a neutrino-dominated universe: a cosmology in which electron neutrinos with a rest mass of a few tens of electron volts would contribute the bulk of the mean density. The evolution of small perturbations is discussed, and estimates are made for the temperature anisotropy of the microwave background radiation on various angular scales. The nonlinear stage in the evolution of smooth irrotational perturbations in a low-pressure medium is described in detail. Numerical experiments simulating large-scale structure formation processes are discussed, as well as their interpretation in the context of catastrophe theory.

  15. Condition Monitoring of Large-Scale Facilities

    Science.gov (United States)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  16. Quantum Signature of Cosmological Large Scale Structures

    CERN Document Server

    Capozziello, Salvatore; De Martino, Salvatore; De Siena, Silvio; Illuminati, Fabrizio

    1998-01-01

    We demonstrate that with all large-scale cosmological structures for which gravitation is the only overall relevant interaction assembling the system (e.g. galaxies), there is associated a characteristic unit of action per particle whose order of magnitude coincides with the Planck action constant h. This result extends the class of physical systems for which quantum coherence can act on macroscopic scales (as e.g. in superconductivity) and agrees with the absence of screening mechanisms for the gravitational forces, as predicted by some renormalizable quantum field theories of gravity. It also seems to support those lines of thought invoking that large scale structures in the Universe should be connected to quantum primordial perturbations as required by inflation, that the Newton constant should vary with time and distance and, finally, that gravity should be considered as an effective interaction induced by quantization.

  17. Wireless Secrecy in Large-Scale Networks

    CERN Document Server

    Pinto, Pedro C; Win, Moe Z

    2011-01-01

    The ability to exchange secret information is critical to many commercial, governmental, and military networks. The intrinsically secure communications graph (iS-graph) is a random graph which describes the connections that can be securely established over a large-scale network, by exploiting the physical properties of the wireless medium. This paper provides an overview of the main properties of this new class of random graphs. We first analyze the local properties of the iS-graph, namely the degree distributions and their dependence on fading, target secrecy rate, and eavesdropper collusion. To mitigate the effect of the eavesdroppers, we propose two techniques that improve secure connectivity. Then, we analyze the global properties of the iS-graph, namely percolation on the infinite plane, and full connectivity on a finite region. These results help clarify how the presence of eavesdroppers can compromise secure communication in a large-scale network.

  18. ELASTIC: A Large Scale Dynamic Tuning Environment

    Directory of Open Access Journals (Sweden)

    Andrea Martínez

    2014-01-01

    Full Text Available The spectacular growth in the number of cores in current supercomputers poses design challenges for the development of performance analysis and tuning tools. To be effective, such analysis and tuning tools must be scalable and be able to manage the dynamic behaviour of parallel applications. In this work, we present ELASTIC, an environment for dynamic tuning of large-scale parallel applications. To be scalable, the architecture of ELASTIC takes the form of a hierarchical tuning network of nodes that perform a distributed analysis and tuning process. Moreover, the tuning network topology can be configured to adapt itself to the size of the parallel application. To guide the dynamic tuning process, ELASTIC supports a plugin architecture. These plugins, called ELASTIC packages, allow the integration of different tuning strategies into ELASTIC. We also present experimental tests conducted using ELASTIC, showing its effectiveness in improving the performance of large-scale parallel applications.

  19. Statistical characteristics of Large Scale Structure

    OpenAIRE

    Demianski; Doroshkevich

    2002-01-01

    We investigate the mass functions of different elements of the Large Scale Structure -- walls, pancakes, filaments and clouds -- and the impact of transverse motions -- expansion and/or compression -- on their statistical characteristics. Using the Zel'dovich theory of gravitational instability we show that the mass functions of all structure elements are approximately the same and the mass of all elements is found to be concentrated near the corresponding mean mass. At high redshifts, both t...

  20. Topologies for large scale photovoltaic power plants

    OpenAIRE

    Cabrera Tobar, Ana; Bullich Massagué, Eduard; Aragüés Peñalba, Mònica; Gomis Bellmunt, Oriol

    2016-01-01

    The concern of increasing renewable energy penetration into the grid together with the reduction of prices of photovoltaic solar panels during the last decade have enabled the development of large scale solar power plants connected to the medium and high voltage grid. Photovoltaic generation components, the internal layout and the ac collection grid are being investigated for ensuring the best design, operation and control of these power plants. This ...

  1. Measuring Bulk Flows in Large Scale Surveys

    CERN Document Server

    Feldman, Hume A.; Watkins, Richard

    1993-01-01

    We follow a formalism presented by Kaiser to calculate the variance of bulk flows in large scale surveys. We apply the formalism to a mock survey of Abell clusters à la Lauer & Postman and find the variance in the expected bulk velocities in a universe with CDM, MDM and IRAS-QDOT power spectra. We calculate the velocity variance as a function of the 1-D velocity dispersion of the clusters and the size of the survey.
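
    In linear theory the variance of the bulk flow within a survey window of radius R is an integral of the power spectrum against a top-hat window, sigma_v^2(R) = (H0^2 f^2 / 2 pi^2) * Int P(k) W^2(kR) dk. The sketch below evaluates this integral numerically for a toy power-law spectrum with an arbitrary normalisation; the toy spectrum is an assumption for illustration, whereas the paper uses CDM, MDM and IRAS-QDOT spectra and a cluster mock survey.

```python
import numpy as np
from scipy import integrate

def tophat_window(x):
    """Fourier transform of a spherical top-hat of radius R, evaluated at x = k R."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def bulk_flow_sigma(R, power, H0=100.0, f=0.5, kmin=1e-4, kmax=10.0):
    """RMS bulk flow (km/s) in a sphere of radius R (Mpc/h) for power spectrum P(k).
    Linear theory: sigma_v^2 = (H0^2 f^2 / 2 pi^2) * Int P(k) W^2(kR) dk."""
    integrand = lambda k: power(k) * tophat_window(k * R) ** 2
    val, _ = integrate.quad(integrand, kmin, kmax, limit=200)
    return H0 * f * np.sqrt(val / (2.0 * np.pi ** 2))

# Toy power spectrum with an arbitrary normalisation (illustrative only).
P = lambda k: 2.0e4 * k / (1.0 + (k / 0.03) ** 2) ** 2

for R in (20.0, 60.0, 150.0):
    print(f"R = {R:6.1f} Mpc/h  ->  sigma_bulk ~ {bulk_flow_sigma(R, P):6.1f} km/s")
```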

  2. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  3. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    Efficient processing techniques are vital to the success of any manufacturing industry. The processing techniques determine the quality of the products and thus to a large extent the performance and reliability of the products that are manufactured. The dielectric electroactive polymer (DEAP) technology is relatively new and is in the initial stages of development, with no established large scale manufacturing techniques. Danfoss Polypower A/S has set up a large scale manufacturing process to make thin film DEAP transducers. The DEAP transducers developed by Danfoss Polypower consist of microstructured elastomer surfaces on which the compliant metallic electrodes are sputtered, thus enabling large strains of the non-stretchable metal electrode. Thin microstructured polydimethylsiloxane (PDMS) films are quintessential in DEAP technology due to the scaling of their actuation strain with the reciprocal ... A new hot embossing technique, developed on a lab scale, is quick, economical, easy and can be implemented on a large scale. The results of all our experiments and the hot embossing technique are discussed.

  4. Large-Scale Visual Data Analysis

    Science.gov (United States)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  5. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  6. Large-scale instabilities of helical flows

    CERN Document Server

    Cameron, Alexandre; Brachet, Marc-Étienne

    2016-01-01

    Large-scale hydrodynamic instabilities of periodic helical flows are investigated using 3D Floquet numerical computations. A minimal three-mode analytical model that reproduces and explains some of the full Floquet results is derived. The growth rate σ of the most unstable modes (at small scale, low Reynolds number Re and small wavenumber q) is found to scale differently in the presence or absence of the anisotropic kinetic alpha (AKA) effect. When an AKA effect is present, the scaling σ ∝ q Re predicted by the AKA effect theory [U. Frisch, Z. S. She, and P. L. Sulem, Physica D: Nonlinear Phenomena 28, 382 (1987)] is recovered for Re ≪ 1 as expected (with most of the energy of the unstable mode concentrated in the large scales). However, as Re increases, the growth rate is found to saturate and most of the energy is found at small scales. In the absence of an AKA effect, it is found that flows can still have large-scale instabilities, but with a negative eddy-viscosity sca...

  7. Supermassive black holes, large scale structure and holography

    CERN Document Server

    Mongan, T R

    2013-01-01

    A holographic analysis of large scale structure in the universe estimates the mass of supermassive black holes at the center of large scale structures with matter density varying inversely as the square of the distance from their center. The estimate is consistent with two important test cases involving observations of the supermassive black hole with mass 3.6×10⁻⁶ times the galactic mass in Sagittarius A* near the center of our Milky Way and the 2×10⁹ solar mass black hole in the quasar ULAS J112001.48+064124.3 at redshift z=7.085. It is also consistent with upper bounds on central black hole masses in the globular clusters M15, M19 and M22 developed using the Jansky Very Large Array in New Mexico.

  8. Quantum noise in large-scale coherent nonlinear photonic circuits

    CERN Document Server

    Santori, Charles; Beausoleil, Raymond G; Tezak, Nikolas; Hamerly, Ryan; Mabuchi, Hideo

    2014-01-01

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A netlist-based circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasi-probability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total, and functions as a 4-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important...

  9. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Part of the work concerns a parameter-choice heuristic based on the L-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...
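
    For orientation, the L-curve heuristic mentioned above chooses a regularization parameter by balancing the residual norm against the solution norm. A minimal Tikhonov/SVD sketch on a small synthetic ill-posed problem is shown below; the test problem and the simple maximum-curvature corner rule are assumptions for illustration, whereas the thesis targets large-scale problems where iterative Krylov methods replace the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic ill-posed problem: a severely ill-conditioned matrix.
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)                # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = np.sin(np.linspace(0, 3, n))
b = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy right-hand side

# Tikhonov solutions x_lam = argmin ||A x - b||^2 + lam^2 ||x||^2, via the SVD.
beta = U.T @ b
lams = 10.0 ** np.linspace(-10, 1, 60)
res_norm, sol_norm = [], []
for lam in lams:
    filt = s / (s ** 2 + lam ** 2)               # Tikhonov filter factors / sigma_i
    x = V @ (filt * beta)
    res_norm.append(np.linalg.norm(A @ x - b))
    sol_norm.append(np.linalg.norm(x))

# The residual-norm vs solution-norm points (in log-log) trace the L-curve;
# a simple proxy for its corner is the point of maximum curvature.
log_r, log_x = np.log(res_norm), np.log(sol_norm)
curv = np.gradient(np.gradient(log_x, log_r), log_r)
print("corner lambda ~", lams[np.argmax(np.abs(curv))])
```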

  10. Large scale wind power penetration in Denmark

    DEFF Research Database (Denmark)

    Karnøe, Peter

    2013-01-01

    The Danish electricity generating system prepared to adopt nuclear power in the 1970s, yet has become the world's front runner in wind power, with a national plan for 50% wind power penetration by 2020. This paper deploys a sociotechnical perspective to explain this historical transformation, showing how expertise evolves and contributes to the normalization and large-scale penetration of wind power in the electricity generating system. The analysis teaches us how technological paths become locked in, but also indicates keys for locking them out.

  11. Colloquium: Large scale simulations on GPU clusters

    Science.gov (United States)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.

  12. What is a large-scale dynamo?

    Science.gov (United States)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.

  13. Conformal Anomaly and Large Scale Gravitational Coupling

    CERN Document Server

    Salehi, H

    2000-01-01

    We present a model in which the breakdown of conformal symmetry of a quantum stress-tensor due to the trace anomaly is related to a cosmological effect in a gravitational model. This is done by characterizing the traceless part of the quantum stress-tensor in terms of the stress-tensor of a conformally invariant classical scalar field. We introduce a conformal frame in which the anomalous trace is identified with a cosmological constant. In this conformal frame we establish the Einstein field equations by connecting the quantum stress-tensor with the large scale distribution of matter in the universe.

  14. Large scale phononic metamaterials for seismic isolation

    Energy Technology Data Exchange (ETDEWEB)

    Aravantinos-Zafiris, N. [Department of Sound and Musical Instruments Technology, Ionian Islands Technological Educational Institute, Stylianou Typaldou ave., Lixouri 28200 (Greece); Sigalas, M. M. [Department of Materials Science, University of Patras, Patras 26504 (Greece)

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.
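
    As an illustrative analogue of such a band-structure calculation (not the FDTD computation used in the paper), the sketch below evaluates the classical transfer-matrix (Rytov) dispersion relation for an infinite one-dimensional bilayer of concrete-like and soil-like layers; frequencies where the right-hand side exceeds 1 in magnitude admit no real Bloch wavenumber and therefore lie in a band gap. The material values and layer thicknesses are rough assumptions.

```python
import numpy as np

# 1D bilayer unit cell: layer 1 (concrete-like) and layer 2 (soil-like).
rho1, c1, d1 = 2400.0, 3500.0, 2.0     # density (kg/m^3), wave speed (m/s), thickness (m)
rho2, c2, d2 = 1800.0,  300.0, 2.0
Z1, Z2 = rho1 * c1, rho2 * c2          # acoustic impedances

freqs = np.linspace(1.0, 60.0, 600)    # Hz, roughly the seismic range of interest
in_gap = []
for f in freqs:
    w = 2.0 * np.pi * f
    k1, k2 = w / c1, w / c2
    # Rytov dispersion relation for normal-incidence waves in a periodic bilayer:
    # cos(q*(d1+d2)) = cos(k1 d1) cos(k2 d2)
    #                  - 0.5*(Z1/Z2 + Z2/Z1) sin(k1 d1) sin(k2 d2)
    rhs = (np.cos(k1 * d1) * np.cos(k2 * d2)
           - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(k1 * d1) * np.sin(k2 * d2))
    in_gap.append(abs(rhs) > 1.0)      # no real Bloch wavenumber -> band gap

in_gap = np.array(in_gap)
if in_gap.any():
    print(f"lowest band gap starts near {freqs[in_gap.argmax()]:.1f} Hz")
```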

  15. Large Scale Quantum Simulations of Nuclear Pasta

    Science.gov (United States)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities of 0.03 ... pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  16. Large-Scale PV Integration Study

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  17. Large-scale Globally Propagating Coronal Waves

    Directory of Open Access Journals (Sweden)

    Alexander Warmuth

    2015-09-01

    Full Text Available Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the “classical” interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which “pseudo waves” are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  18. Large-scale data mining pilot project in human genome

    Energy Technology Data Exchange (ETDEWEB)

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

    This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Livermore National Labs. The implications of `large-scale' will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Genome project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that we will build on a fully-staffed data warehousing effort in the human Genome area. The long term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving a need for large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  19. Multivariate Clustering of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since 'similar' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
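
    A minimal sketch of threshold-based clustering with the cosine similarity measure is given below. It uses a simple greedy assignment to cluster representatives; the paper's algorithm additionally exploits geometric properties of the measure to avoid the O(n^2) comparisons and links clusters back to the topology tree, neither of which is shown here.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def greedy_cosine_clusters(points, threshold=0.95):
    """Assign each point to the first cluster whose representative is within the
    cosine-similarity threshold; otherwise start a new cluster."""
    reps, labels = [], []
    for p in points:
        for idx, r in enumerate(reps):
            if cosine_similarity(p, r) >= threshold:
                labels.append(idx)
                break
        else:
            reps.append(p)
            labels.append(len(reps) - 1)
    return np.array(labels)

# Toy "field variable" vectors (spatial coordinates excluded, as in the paper).
rng = np.random.default_rng(1)
base = rng.standard_normal((3, 8))
data = np.vstack([base[i] * rng.uniform(0.5, 2.0) + 0.01 * rng.standard_normal(8)
                  for i in rng.integers(0, 3, size=200)])
print("clusters found:", greedy_cosine_clusters(data, 0.99).max() + 1)
```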

  20. UAV Data Processing for Large Scale Topographical Mapping

    Science.gov (United States)

    Tampubolon, W.; Reinhardt, W.

    2014-06-01

    Large scale topographical mapping in third world countries is a prominent challenge in the geospatial industry nowadays. On one side the demand is significantly increasing, while on the other hand it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr.4/yr.2011 about Geospatial Information in Indonesia, large scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Usually large scale topographical mapping relies on conventional aerial survey campaigns in order to provide high resolution 3D geospatial data sources. Having grown widely as a leisure hobby, aero models in the form of so-called Unmanned Aerial Vehicles (UAVs) offer alternative semi-photogrammetric aerial data acquisition possibilities suitable for a relatively small Area of Interest (AOI); in Indonesia this area size can be used as a mapping unit, since mapping usually concentrates on the sub-district (kecamatan) level. In this paper different camera and processing software systems will be analyzed in order to identify the optimum UAV data acquisition campaign components in combination with the data processing scheme. The selected AOI covers the cultural heritage site of Borobudur Temple, one of the Seven Wonders of the World. A detailed accuracy assessment will concentrate on the object features of the temple in the first place. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) will be integrated in order to provide Digital Elevation Models (DEM) as the main interest of the topographic mapping activity. Incorporating the optimum number of GCPs in the UAV photo data processing will increase the accuracy of the resulting data at its high resolution of 5 cm Ground Sampling Distance (GSD). Finally this result will be used as the benchmark for alternative geospatial data acquisition in the future, in which it can support
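
    The flight planning behind a target GSD follows from the standard photogrammetric relation GSD = pixel pitch x flying height / focal length. The sketch below applies it with assumed camera parameters, not the actual cameras evaluated in the paper.

```python
def flying_height_for_gsd(gsd_m, focal_length_mm, pixel_pitch_um):
    """Flying height (m above ground) needed to achieve a given GSD.
    GSD = pixel_pitch * H / focal_length  =>  H = GSD * f / pixel_pitch."""
    return gsd_m * (focal_length_mm * 1e-3) / (pixel_pitch_um * 1e-6)

# Assumed small-format camera: 16 mm lens, 4.8 micron pixel pitch.
for gsd_cm in (3, 5, 10):
    h = flying_height_for_gsd(gsd_cm / 100.0, focal_length_mm=16.0, pixel_pitch_um=4.8)
    print(f"GSD {gsd_cm:2d} cm  ->  fly at about {h:5.0f} m above ground")
```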

  1. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. On the other hand, inflation is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and in physics in general. There are several well defined approaches to solve this problem. One of them is the assumption concerning the existence of dark energy in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and of the speed of light (among others), provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, here two groups of cosmological models are discussed. In the first group the problem of the accelerated expansion of the large-scale universe is discussed involving a new idea, named the varying ghost dark energy. On the other hand, the second group contains cosmological models addressing the same problem involving either new parameterizations of the equation of state parameter of dark energy (like the varying polytropic gas), or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation dominated universe (when the background dynamics is due to general relativity) is demonstrated as well. Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  2. Multitree Algorithms for Large-Scale Astrostatistics

    Science.gov (United States)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
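
    The central idea of such fast algorithms is to replace brute-force O(N^2) pairwise work with space-partitioning trees. The snippet below illustrates the flavour of tree-accelerated all-nearest-neighbour search and kernel density estimation using scikit-learn's KD-tree-backed estimators on a toy catalog; these are standard library calls, not the multitree codes described in the chapter.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors, KernelDensity

rng = np.random.default_rng(42)
catalog = rng.uniform(0.0, 1.0, size=(100_000, 3))   # toy 3D "object catalog"

# All-nearest-neighbours (e.g. for cross-matching): tree construction is
# O(N log N) and the queries avoid the O(N^2) brute-force cost.
nn = NearestNeighbors(n_neighbors=2, algorithm="kd_tree").fit(catalog)
dist, _ = nn.kneighbors(catalog)                      # column 0 is the point itself
print("median nearest-neighbour separation:", np.median(dist[:, 1]))

# Kernel density estimation on the same catalog, also tree-accelerated.
kde = KernelDensity(kernel="gaussian", bandwidth=0.02).fit(catalog)
sample_points = rng.uniform(0.0, 1.0, size=(5, 3))
print("log-density at 5 random points:", kde.score_samples(sample_points))
```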

  3. Clumps in large scale relativistic jets

    CERN Document Server

    Tavecchio, F; Celotti, A

    2003-01-01

    The relatively intense X-ray emission from large scale (tens to hundreds of kpc) jets discovered with Chandra likely implies that jets (at least in powerful quasars) are still relativistic at those distances from the active nucleus. In this case the emission is due to Compton scattering off seed photons provided by the Cosmic Microwave Background, and this on the one hand permits magnetic fields close to equipartition with the emitting particles, and on the other hand minimizes the requirements on the total power carried by the jet. The emission comes from compact (kpc scale) knots, and we here investigate what we can predict about the possible emission between the bright knots. This is motivated by the fact that bulk relativistic motion makes Compton scattering off the CMB photons efficient even when electrons are cold or mildly relativistic in the comoving frame. This implies relatively long cooling times, dominated by adiabatic losses. Therefore the relativistically moving plasma can emit, by Compton sc...

  4. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  5. Accelerated large-scale multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Lloyd Scott

    2011-12-01

    Full Text Available Abstract Background Multiple sequence alignment (MSA is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware. Results We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor. Conclusions Our parallel algorithm and architecture accelerates large-scale MSA with reconfigurable computing and allows researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/.
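
    The computational kernel accelerated in each stage of progressive alignment is a dynamic-programming pairwise alignment. A minimal Needleman-Wunsch sketch on plain sequences is shown below, with an assumed simple scoring scheme; the third-stage hardware described here instead aligns reduced profiles of already-aligned subgroups.

```python
import numpy as np

def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global pairwise alignment score by dynamic programming, O(len(a)*len(b))."""
    n, m = len(a), len(b)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = gap * np.arange(n + 1)          # leading gaps in b
    dp[0, :] = gap * np.arange(m + 1)          # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i, j] = max(dp[i - 1, j - 1] + sub,   # substitution
                           dp[i - 1, j] + gap,       # gap in b
                           dp[i, j - 1] + gap)       # gap in a
    return dp[n, m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```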

  6. Large-Scale Tides in General Relativity

    CERN Document Server

    Ip, Hiu Yan

    2016-01-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation ...

  7. Large scale water lens for solar concentration.

    Science.gov (United States)

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low density polyethylene foil. With the lenses exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering the wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.
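
    To first order, a thin plano-convex water lens can be sized with the lensmaker's equation, 1/f = (n - 1)/R with n ≈ 1.33 for water. The sketch below uses assumed geometry purely for orientation; the actual lens shape in the paper is set by the elastic foil under the water load and was modeled by ray tracing and finite elements.

```python
def planoconvex_focal_length(radius_of_curvature_m, n_lens=1.333):
    """Thin-lens focal length of a plano-convex lens: 1/f = (n - 1) / R."""
    return radius_of_curvature_m / (n_lens - 1.0)

def geometric_concentration(aperture_diameter_m, focal_spot_diameter_m):
    """Ratio of aperture area to focal-spot area (ignores losses and aberrations)."""
    return (aperture_diameter_m / focal_spot_diameter_m) ** 2

R = 1.2          # assumed effective radius of curvature of the water surface, m
D = 1.0          # assumed lens aperture, m
f = planoconvex_focal_length(R)
print(f"focal length ~ {f:.2f} m")
print(f"concentration for a 5 cm focal spot ~ {geometric_concentration(D, 0.05):.0f}x")
```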

  8. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  9. Large-scale parametric survival analysis.

    Science.gov (United States)

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

    Survival analysis has been a topic of active statistical research in the past few decades with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very-high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
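
    As a sketch of the kind of computation involved, the snippet below fits an L2-regularized exponential (parametric) survival model by cyclic coordinate descent on synthetic right-censored data. It is only illustrative: the tool described in the paper handles far higher dimensions, other parametric families and sparse data structures.

```python
import numpy as np

def fit_exponential_survival(X, t, delta, lam=1.0, sweeps=100):
    """L2-regularized exponential survival regression via cyclic coordinate descent.
    Hazard of subject i is exp(X[i] @ beta); delta[i] = 1 if the event was observed
    (0 if right-censored). Maximizes loglik(beta) - lam/2 * ||beta||^2."""
    n, p = X.shape
    beta = np.zeros(p)
    eta = X @ beta
    for _ in range(sweeps):
        for j in range(p):
            mu = t * np.exp(eta)                        # expected events under the model
            grad = X[:, j] @ (delta - mu) - lam * beta[j]
            hess = -(X[:, j] ** 2) @ mu - lam           # always negative
            step = grad / hess
            beta[j] -= step                             # one-dimensional Newton update
            eta -= X[:, j] * step                       # keep linear predictor in sync
    return beta

# Synthetic example: 2000 subjects, 10 covariates, exponential event times, censoring.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))
beta_true = np.r_[0.8, -0.5, np.zeros(8)]
t_event = rng.exponential(1.0 / np.exp(X @ beta_true))
t_cens = rng.exponential(2.0, size=2000)
t = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)
print(np.round(fit_exponential_survival(X, t, delta, lam=1.0), 2))
```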

  10. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    ... within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective, the formulation of the design problem or brief becomes an integrated part of the design process. This paper discusses new design (education) methods based on a relational concept of urban sites and design processes. Within this logic, site survey is not simply a pre-design activity, nor is it a question of comprehensive analysis; site survey is an integrated part of the design process. By means of active site survey ... for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects ...

  11. Supporting large-scale computational science

    Energy Technology Data Exchange (ETDEWEB)

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: 1. Several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets. 2. Several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme. 3. Several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data. 4. In some cases, performance is a moot issue; this is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  12. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
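
    For orientation, the class of problems targeted (a nonlinear objective with mixed nonlinear equality and inequality constraints) and the SQP idea can be illustrated with SciPy's dense, small-scale SLSQP solver; the algorithm developed in this work supplies the sparse, reduced-Hessian machinery needed to scale the same idea to large problems. The toy problem below is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# minimize f(x) = (x0 - 1)^2 + (x1 - 2.5)^2
# subject to  x0^2 + x1^2 = 2        (nonlinear equality)
#             x0 - x1 + 1 >= 0       (nonlinear inequality), and x >= 0
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [
    {"type": "eq",   "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 2.0},
    {"type": "ineq", "fun": lambda x: x[0] - x[1] + 1.0},
]
res = minimize(objective, x0=np.array([1.0, 0.5]), method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=constraints)
print(res.x, res.fun, res.success)
```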

  13. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and "trap-happiness" effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  14. Foundational perspectives on causality in large-scale brain networks.

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.
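
    One family of probabilistic measures discussed in this context is Wiener-Granger causality, which asks how much the past of one time series improves prediction of another. The sketch below computes a simple bivariate Granger-style measure on synthetic data; it is purely illustrative and not the authors' analysis pipeline.

```python
import numpy as np

def granger_xy(x, y, order=2):
    """Granger-style measure of x -> y: log ratio of residual variances of an AR
    model for y without vs. with past values of x (larger => stronger influence)."""
    n = len(y)
    rows_restricted, rows_full, targets = [], [], []
    for t in range(order, n):
        past_y = y[t - order:t][::-1]
        past_x = x[t - order:t][::-1]
        rows_restricted.append(past_y)
        rows_full.append(np.concatenate([past_y, past_x]))
        targets.append(y[t])
    targets = np.array(targets)

    def resid_var(rows):
        A = np.column_stack([np.ones(len(rows)), np.array(rows)])
        coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
        return np.var(targets - A @ coef)

    return np.log(resid_var(rows_restricted) / resid_var(rows_full))

# Synthetic example: population x drives population y with a one-step lag.
rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t - 1] + 0.3 * rng.standard_normal()

print("x -> y:", round(granger_xy(x, y), 3))   # clearly positive
print("y -> x:", round(granger_xy(y, x), 3))   # near zero
```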

  15. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.

  16. Gravitational redshifts from large-scale structure

    CERN Document Server

    Croft, Rupert A C

    2013-01-01

    The recent measurement of the gravitational redshifts of galaxies in galaxy clusters by Wojtak et al. has opened a new observational window on dark matter and modified gravity. By stacking clusters this determination effectively used the line of sight distortion of the cross-correlation function of massive galaxies and lower mass galaxies to estimate the gravitational redshift profile of clusters out to 4 Mpc/h. Here we use a halo model of clustering to predict the distortion due to gravitational redshifts of the cross-correlation function on scales from 1 - 100 Mpc/h. We compare our predictions to simulations and use the simulations to make mock catalogues relevant to current and future galaxy redshift surveys. Without formulating an optimal estimator, we find that the full BOSS survey should be able to detect gravitational redshifts from large-scale structure at the ~4 sigma level. Upcoming redshift surveys will greatly increase the number of galaxies useable in such studies and the BigBOSS and Euclid exper...
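
    The underlying observable is simple: the gravitational redshift of a galaxy relative to the cluster centre is the potential difference divided by c^2. The back-of-the-envelope sketch below evaluates this for an NFW-like halo with assumed, illustrative parameters; the paper itself uses a full halo model of clustering and mock catalogues.

```python
import numpy as np

G = 4.301e-9          # gravitational constant in Mpc (km/s)^2 / Msun
C_KMS = 2.998e5       # speed of light, km/s

def nfw_potential(r, M200=2e14, r200=1.2, conc=5.0):
    """NFW potential in (km/s)^2 at radius r (Mpc); the halo parameters are
    illustrative assumptions for a typical stacked cluster."""
    rs = r200 / conc
    m = np.log(1.0 + conc) - conc / (1.0 + conc)
    A = G * M200 / m                          # = 4 pi G rho_s rs^3
    if r == 0.0:
        return -A / rs                        # finite central value of the NFW potential
    return -A * np.log(1.0 + r / rs) / r

phi_centre = nfw_potential(0.0)
for r in (0.5, 1.0, 2.0, 4.0):                # Mpc
    dz = (nfw_potential(r) - phi_centre) / C_KMS ** 2
    print(f"r = {r:3.1f} Mpc: potential-difference shift ~ {dz * C_KMS:5.1f} km/s "
          "(galaxies appear blueshifted relative to the centre by this amount)")
```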

  17. Large scale probabilistic available bandwidth estimation

    CERN Document Server

    Thouin, Frederic; Rabbat, Michael

    2010-01-01

    The common utilization-based definition of available bandwidth and many of the existing tools to estimate it suffer from several important weaknesses: i) most tools report a point estimate of average available bandwidth over a measurement interval and do not provide a confidence interval; ii) the commonly adopted models used to relate the available bandwidth metric to the measured data are invalid in almost all practical scenarios; iii) existing tools do not scale well and are not suited to the task of multi-path estimation in large-scale networks; iv) almost all tools use ad-hoc techniques to address measurement noise; and v) tools do not provide enough flexibility in terms of accuracy, overhead, latency and reliability to adapt to the requirements of various applications. In this paper we propose a new definition for available bandwidth and a novel framework that addresses these issues. We define probabilistic available bandwidth (PAB) as the largest input rate at which we can send a traffic flow along a pa...
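
    Given samples of a path's available bandwidth, the PAB definition reduces to an empirical quantile: the largest rate r such that P(available bandwidth >= r) meets the target probability. The sketch below illustrates just that definition on synthetic samples; the framework in the paper instead infers PAB from active probes across many paths.

```python
import numpy as np

def probabilistic_available_bandwidth(samples_mbps, prob=0.9):
    """Largest rate r such that P(available bandwidth >= r) >= prob,
    estimated from an empirical sample as the (1 - prob) quantile."""
    return float(np.quantile(np.asarray(samples_mbps), 1.0 - prob))

# Synthetic measurements of a path's available bandwidth (Mb/s).
rng = np.random.default_rng(7)
samples = np.clip(rng.normal(60.0, 15.0, size=500), 0.0, None)

for p in (0.5, 0.9, 0.99):
    r = probabilistic_available_bandwidth(samples, prob=p)
    print(f"rate sustainable with probability {p:4.2f}: ~{r:5.1f} Mb/s")
```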

  18. Large-scale tides in general relativity

    Science.gov (United States)

    Ip, Hiu Yan; Schmidt, Fabian

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  19. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  20. Food appropriation through large scale land acquisitions

    Science.gov (United States)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.
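
    The headline figures follow from simple appropriation arithmetic: area x yield x caloric content / per-capita caloric need. The sketch below reproduces that arithmetic with rounded, assumed inputs, not the paper's country-level dataset.

```python
def people_fed(area_mha, yield_t_per_ha, kcal_per_kg, kcal_per_person_day=2500.0):
    """Rough number of people that annual production from the given area could feed."""
    tonnes = area_mha * 1e6 * yield_t_per_ha                 # total production, t/yr
    kcal_per_year = tonnes * 1000.0 * kcal_per_kg            # total calories per year
    return kcal_per_year / (kcal_per_person_day * 365.0)

# Assumed illustrative inputs: ~35 Mha of acquired cropland planted with a
# cereal-like crop (~3500 kcal/kg), with and without closing the yield gap.
for label, y in (("current yields", 2.0), ("yield gap closed", 4.0)):
    print(f"{label:17s}: ~{people_fed(35.0, y, 3500.0) / 1e6:5.0f} million people")
```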

  1. Cold flows and large scale tides

    Science.gov (United States)

    van de Weygaert, R.; Hoffman, Y.

    1999-01-01

    Within the context of the general cosmological setting it has remained puzzling that the local Universe is a relatively cold environment, in the sense that small-scale peculiar velocities are relatively small. Indeed, this has long figured as an important argument for the Universe having a low Ω or, if the Universe were to have a high Ω, for the existence of a substantial bias between the galaxy and the matter distribution. Here we investigate the dynamical impact of neighbouring matter concentrations on local small-scale characteristics of cosmic flows. While regions whose local dynamics and kinematics are dominated by huge nearby matter clumps may experience a faster collapse because of the corresponding tidal influence, that same influence will also slow down or even prevent a thorough mixing and virialization of the collapsing region. By means of N-body simulations starting from constrained realizations of regions of modest density surrounded by more pronounced massive structures, we have explored the extent to which the large scale tidal fields may indeed suppress the `heating' of the small-scale cosmic velocities. Among other measures, we quantify the resulting cosmic flows through the cosmic Mach number. This allows us to draw conclusions about the validity of estimates of global cosmological parameters from local cosmic phenomena and the necessity of taking into account the structure and distribution of mass in the local Universe.

  2. Large scale mechanical metamaterials as seismic shields

    Science.gov (United States)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided by exploring different metamaterial configurations, combining phononic crystals and locally resonant structures, and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite-size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  3. Large-scale autostereoscopic outdoor display

    Science.gov (United States)

    Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich

    2013-03-01

    State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.

  4. Brief Mental Training Reorganizes Large-Scale Brain Networks

    Science.gov (United States)

    Tang, Yi-Yuan; Tang, Yan; Tang, Rongxiang; Lewis-Peacock, Jarrod A.

    2017-01-01

    Emerging evidence has shown that one form of mental training, mindfulness meditation, can improve attention, emotion regulation and cognitive performance by changing brain activity and structural connectivity. However, whether and how short-term mindfulness meditation alters large-scale brain networks is not well understood. Here, we applied a novel data-driven technique, multivariate pattern analysis (MVPA), to resting-state fMRI (rsfMRI) data to identify changes in brain activity patterns and assess the neural mechanisms induced by a brief mindfulness training, integrative body-mind training (IBMT), which was previously reported in our series of randomized studies. Whole brain rsfMRI was performed on an undergraduate group who received 2 weeks of IBMT with 30 min per session (5 h training in total). Classifiers were trained on measures of functional connectivity in these fMRI data, and they were able to reliably differentiate (with 72% accuracy) patterns of connectivity from before vs. after the IBMT training. After training, an increase in positive functional connections (60 connections) was detected, primarily involving bilateral superior/middle occipital gyrus, bilateral frontal operculum, bilateral superior temporal gyrus, right superior temporal pole, bilateral insula, caudate and cerebellum. These results suggest that brief mental training alters the functional connectivity of large-scale brain networks at rest, which may involve a portion of the neural circuitry supporting attention, cognitive and affective processing, awareness, sensory integration and reward processing.
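
    The general shape of such an MVPA analysis, classifying "before" versus "after" connectivity patterns with cross-validation, can be sketched as below; the random data and the linear SVM stand in for the real connectivity features and whatever classifier the authors used.

        # Minimal MVPA-style sketch: classify pre- vs post-training connectivity patterns.
        # Random numbers stand in for resting-state functional connectivity features.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_subjects, n_connections = 40, 300
        X_pre = rng.normal(0.0, 1.0, (n_subjects, n_connections))
        X_post = rng.normal(0.1, 1.0, (n_subjects, n_connections))  # slight shift after training
        X = np.vstack([X_pre, X_post])
        y = np.array([0] * n_subjects + [1] * n_subjects)           # 0 = before, 1 = after

        accuracy = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
        print(f"Cross-validated accuracy: {accuracy:.2f}")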

  5. Developing Large-Scale Bayesian Networks by Composition

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  6. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology Enables practitioners to study distributed large scale dimensional metrology independently Includes specific examples of the development of new system prototypes

  7. Sensitivity technologies for large scale simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias (Rice University, Houston, TX); Wilcox, Lucas C. (Brown University, Providence, RI); Hill, Judith C. (Carnegie Mellon University, Pittsburgh, PA); Ghattas, Omar (Carnegie Mellon University, Pittsburgh, PA); Berggren, Martin Olof (University of UppSala, Sweden); Akcelik, Volkan (Carnegie Mellon University, Pittsburgh, PA); Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
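
    In generic notation (not the report's), the steady-state versions of the two approaches named above can be summarized as follows. For a residual equation $R(u,p)=0$ with state $u$, parameters $p$ and objective $J(u,p)$:

        \text{direct:}\quad \frac{\partial R}{\partial u}\,\frac{du}{dp} = -\frac{\partial R}{\partial p}, \qquad \frac{dJ}{dp} = \frac{\partial J}{\partial p} + \frac{\partial J}{\partial u}\,\frac{du}{dp};

        \text{adjoint:}\quad \left(\frac{\partial R}{\partial u}\right)^{T}\lambda = \left(\frac{\partial J}{\partial u}\right)^{T}, \qquad \frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{T}\,\frac{\partial R}{\partial p}.

    The direct approach requires one linear solve per parameter, while the adjoint approach requires one solve per objective, which is why adjoints pay off when there are many parameters and few objectives.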

  9. Large Scale Flame Spread Environmental Characterization Testing

    Science.gov (United States)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in the chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was applied to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run outside the experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid, acceleratory, turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generated and, to a much smaller extent, because of the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism involved mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means of chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  10. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  11. Synchronization of coupled large-scale Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Li, Fangfei, E-mail: li-fangfei@163.com [Department of Mathematics, East China University of Science and Technology, No. 130, Meilong Road, Shanghai, Shanghai 200237 (China)

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
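
    What "complete synchronization" means for coupled Boolean networks can be illustrated with a toy drive-response pair; the update rules below are invented for illustration and are unrelated to the aggregation algorithm studied in the paper.

        # Toy illustration of complete synchronization between a drive Boolean network
        # and a response network fully driven by it (made-up update rules).
        def f(x):
            """Shared update rule for a two-node Boolean state x = (x1, x2)."""
            x1, x2 = x
            return (x1 ^ x2, x1 and x2)

        def step(x, y):
            # The drive evolves on its own; the response is computed from the same
            # drive state, so the two trajectories coincide after one step.
            return f(x), f(x)

        def check_complete_sync(steps=10):
            states = [(a, b) for a in (0, 1) for b in (0, 1)]
            for x0 in states:
                for y0 in states:
                    x, y = x0, y0
                    for _ in range(steps):
                        x, y = step(x, y)
                    if x != y:
                        return False
            return True

        print(check_complete_sync())  # True: all trajectories coincide after the transient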

  12. Synchronization of coupled large-scale Boolean networks

    Science.gov (United States)

    Li, Fangfei

    2014-03-01

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  13. Can wide consultation help with setting priorities for large-scale biodiversity monitoring programs?

    Directory of Open Access Journals (Sweden)

    Frédéric Boivin

    Full Text Available Climate and other global change phenomena affecting biodiversity require monitoring to track ecosystem changes and guide policy and management actions. Designing a biodiversity monitoring program is a difficult task that requires making decisions that often lack consensus due to budgetary constraints. As monitoring programs require long-term investment, they also require strong and continuing support from all interested parties. As such, stakeholder consultation is key to identifying priorities and making sound design decisions that have as much support as possible. Here, we present the results of a consultation conducted to serve as an aid for designing a large-scale biodiversity monitoring program for the province of Québec (Canada). The consultation took the form of a survey with 13 discrete choices involving tradeoffs with respect to design priorities and 10 demographic questions (e.g., age, profession). The survey was sent to thousands of individuals with expected interest and knowledge about biodiversity and was completed by 621 participants. Overall, consensuses were few and it appeared difficult to create a design fulfilling the priorities of the majority. Most participants wanted (1) a monitoring design covering the entire territory and focusing on natural habitats, and (2) a focus on species related to ecosystem services, on threatened species, and on invasive species. The only demographic characteristic related to the type of prioritization was the declared level of knowledge in biodiversity (null to high), but even then the influence was quite small.

  14. Lightweight computational steering of very large scale molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Beazley, D.M. [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Computer Science; Lomdahl, P.S. [Los Alamos National Lab., NM (United States)

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
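
    The steering pattern described above, a scripting layer that runs a batch of timesteps, inspects the simulation state, and adjusts it before continuing, can be sketched as follows; the MDKernel class is a stand-in for the wrapped C simulation code and is not taken from the original system.

        # Sketch of the computational-steering pattern: a script drives a simulation
        # kernel, inspecting and adjusting it between batches of timesteps.
        # `MDKernel` is a hypothetical stand-in for the wrapped C code.
        import random

        class MDKernel:
            def __init__(self, n_atoms):
                self.n_atoms = n_atoms
                self.temperature = 300.0

            def run(self, steps):
                # placeholder dynamics: temperature drifts randomly
                self.temperature += random.uniform(-1.0, 1.0) * steps * 0.01

            def rescale_velocities(self, target):
                self.temperature = target

        sim = MDKernel(n_atoms=1_000_000)
        for batch in range(5):
            sim.run(steps=100)
            print(f"batch {batch}: T = {sim.temperature:.1f} K")
            if abs(sim.temperature - 300.0) > 2.0:   # steering decision made in the script
                sim.rescale_velocities(300.0)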

  15. Pro website development and operations streamlining DevOps for large-scale websites

    CERN Document Server

    Sacks, Matthew

    2012-01-01

    Pro Website Development and Operations gives you the experience you need to create and operate a large-scale production website. Large-scale websites have their own unique set of problems regarding their design, problems that can get worse when agile methodologies are adopted for rapid results. Managing large-scale websites, deploying applications, and ensuring they are performing well often requires a full-scale team involving the development and operations sides of the company, two departments that don't always see eye to eye. When departments struggle with each other, it adds unnecessary comp

  16. Extending large-scale forest inventories to assess urban forests.

    Science.gov (United States)

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter

    2012-03-01

    Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential to significantly improve the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on data from the Italian NFI.
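
    In generic notation (not necessarily the estimators used in the paper), a first-phase point-sampling estimate of urban-forest coverage over a study region of area $A$, based on $n$ randomly placed sample points, has the familiar form

        \hat{p} = \frac{1}{n}\sum_{i=1}^{n} y_{i}, \qquad \hat{C} = A\,\hat{p}, \qquad \widehat{\mathrm{Var}}(\hat{p}) = \frac{\hat{p}\,(1-\hat{p})}{n-1},

    where $y_{i}=1$ if point $i$ falls in urban forest and $y_{i}=0$ otherwise; $\hat{C}$ estimates the covered area, and the variance expression is the usual approximately unbiased estimator for simple random sampling of points.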

  17. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  18. Population generation for large-scale simulation

    Science.gov (United States)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat.1-3 Multi-agent simulations in particular are now commonplace in many fields.4, 5 By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive.6 To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application.7, 8 Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations.9-11 One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
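
    The two ingredients described above, attribute draws for each agent plus an explicit relationship structure, can be sketched in a few lines; the attribute names, role labels and random-graph relations below are placeholders and are not taken from the Hats Simulator.

        # Minimal sketch of generating a synthetic multi-agent population:
        # independent attribute draws plus a simple relationship graph.
        import random

        def make_agents(n):
            agents = []
            for i in range(n):
                agents.append({
                    "id": i,
                    "height_cm": random.gauss(170, 10),
                    "morale": random.uniform(0, 1),
                    "role": random.choices(["benign", "known", "covert"],
                                           weights=[0.9, 0.05, 0.05])[0],
                })
            return agents

        def make_relations(agents, mean_degree=4):
            # simple random graph; realistic populations need structured social networks
            edges, n = set(), len(agents)
            while len(edges) < n * mean_degree // 2:
                a, b = random.sample(range(n), 2)
                edges.add((min(a, b), max(a, b)))
            return edges

        population = make_agents(1000)
        relations = make_relations(population)
        print(len(population), "agents,", len(relations), "relationships")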

  19. Optimal Multilevel Control for Large Scale Interconnected Systems

    Directory of Open Access Journals (Sweden)

    Ahmed M. A. Alomar

    2014-04-01

    Full Text Available A mathematical model of the finishing mill, as an example of a large-scale interconnected dynamical system, is presented. First, the system response due to disturbance alone is presented. Then, the control technique applied to the finishing hot rolling steel mill is optimal multilevel control using state feedback. An optimal controller is developed based on the integrated system model, but due to the complexity of the controllers and the tremendous computational effort involved, a multilevel technique is used in designing and implementing the controllers. The basis of the multilevel technique is described and a computational algorithm is discussed for the control of the finishing mill system. To reduce the mass storage, memory requirements and the computational time of the processor, a sub-optimal multilevel technique is applied to design the controllers of the finishing mill. A comparison between these controllers is presented, followed by conclusions.
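
    The single-level building block of such a design, an optimal state-feedback gain for a linear plant, can be sketched with a standard LQR computation; the two-state system below is a placeholder, not the finishing-mill model, and the multilevel decomposition itself is not shown.

        # Sketch of optimal state-feedback design (LQR) for a toy linear system,
        # giving the gain K in the feedback law u = -K x.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])          # placeholder plant dynamics
        B = np.array([[0.0],
                      [1.0]])
        Q = np.eye(2)                          # state weighting
        R = np.array([[1.0]])                  # control weighting

        P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
        K = np.linalg.inv(R) @ B.T @ P         # optimal feedback gain
        print("Feedback gain K =", K)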

  20. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems. The first is decomposition techniques; the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...
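
    The statistical reweighting idea at the core of such sampling-based approaches can be sketched generically, estimating an expectation under a new input distribution by reweighting samples drawn from a base distribution instead of re-running the model; this is a generic importance-reweighting example, not the BONUS implementation.

        # Sketch of sampling-based reweighting of an expensive model output.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        x = rng.normal(loc=0.0, scale=1.0, size=5000)   # samples from the base distribution
        y = x**2 + 2.0 * x                              # stand-in for an expensive model output

        base_pdf = norm.pdf(x, loc=0.0, scale=1.0)
        target_pdf = norm.pdf(x, loc=0.5, scale=0.8)    # distribution proposed by the optimizer
        w = target_pdf / base_pdf                       # importance weights

        estimate = np.sum(w * y) / np.sum(w)            # reweighted expectation under the target
        print(f"E[y] under the target distribution ~ {estimate:.3f}")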

  1. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    Science.gov (United States)

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost effectively evolve such applications over a long lifetime.
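
    The flavor of injecting policy decision points into an existing workflow at runtime can be illustrated with a small sketch; the decision-point names, policies and workflow step below are invented and are not drawn from the PDD cyberinfrastructure.

        # Sketch of runtime policy injection: policies are registered against named
        # decision points and consulted when the workflow reaches them, without
        # editing the workflow code itself.
        policies = {}

        def register_policy(decision_point, policy):
            policies.setdefault(decision_point, []).append(policy)

        def decide(decision_point, context):
            # every registered policy must allow the action
            return all(p(context) for p in policies.get(decision_point, []))

        def publish_dataset(user, dataset):          # existing workflow step
            if not decide("publish", {"user": user, "dataset": dataset}):
                return "denied"
            return f"{dataset} published by {user}"

        # Stakeholders inject policies later, at runtime:
        register_policy("publish", lambda ctx: ctx["user"] != "guest")
        register_policy("publish", lambda ctx: not ctx["dataset"].startswith("restricted_"))

        print(publish_dataset("alice", "ocean_observations"))   # allowed
        print(publish_dataset("guest", "ocean_observations"))   # denied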

  2. Thermal power generation projects ``Large Scale Solar Heating``; EU-Thermie-Projekte ``Large Scale Solar Heating``

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the ``Large-Scale Solar Heating`` programme for a Europe-wide development of the technology. The demonstration programme that followed was judged favourably by the experts but was not immediately (1996) accepted for financial subsidies. In November 1997 the EU Commission provided 1.5 million ECU, which allowed an updated project proposal to be realised. By mid-1997 a smaller project had already been approved; it had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and is mainly carried out for the transfer of technology. (orig.)

  3. Large-Scale Organizational Performance Improvement.

    Science.gov (United States)

    Pilotto, Rudy; Young, Jonathan O'Donnell

    1999-01-01

    Describes the steps involved in a performance improvement program in the context of a large multinational corporation. Highlights include a training program for managers that explained performance improvement; performance matrices; divisionwide implementation, including strategic planning; organizationwide training of all personnel; and the…

  4. Simulation of large scale pedestrian flow

    OpenAIRE

    Dridi, Mohamed H.

    2015-01-01

    Pedestrian simulation is a challenging and fruitful application area for particle simulation, especially in places where many people are gathered (e.g. the Hajj, sports and concert events). Traffic and transportation domains take advantage of this simulation as well. Here the design and implementation involves interesting issues and particle-based modelling allows for the reproduction of pedestrian behaviour to a level of detail beyond pure collision-free locomotion. In this dissertation we w...

  5. Large-Scale Filaments: Newtonian versus Modified Dynamics

    Science.gov (United States)

    Milgrom, Mordehai

    1997-03-01

    Eisenstein, Loeb, & Turner (ELT) have recently proposed a method for estimating the dynamical masses of large-scale filaments, whereby the filament is modeled by an infinite, axisymmetric, isothermal, self-gravitating, radially virialized cylinder, for which ELT derive a global relation between the (constant) velocity dispersion and the total line density. We show that the model assumptions of ELT can be relaxed materially: an exact relation between the rms velocity and the line density can be derived for any infinite cylinder (not necessarily axisymmetric) with an arbitrary constituent distribution function (so isothermality need not be assumed). We also consider the same problem in the context of the modified Newtonian dynamics (MOND). After we compare the scaling properties in the two theories, we study two idealized MOND model filaments, one with assumptions similar to those of ELT, which we can only solve numerically, and another, which we solve in closed form. A preliminary application to the same segment of the Perseus-Pisces filament treated by ELT gives MOND M/L estimates of order 10(M/L)⊙, compared with the Newtonian value M/L ~ 450(H0/100 km s-1 Mpc-1)(M/L)⊙ that ELT find. In spite of the large uncertainties still besetting the analysis, this instance of MOND application is of particular interest because (1) objects of this geometry have not been dealt with before; (2) it pertains to large-scale structure; and (3) the typical accelerations involved are the lowest so far encountered in a semivirialized system--only a few percent of the critical MOND acceleration--leading to a large predicted mass discrepancy.
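
    For orientation, the Newtonian benchmark behind the ELT method is the virial relation for an infinite self-gravitating cylinder (written here in generic notation; the paper's generalization and its MOND analogue are not reproduced):

        \mu \simeq \frac{2\,\sigma^{2}}{G},

    where $\mu$ is the line density (mass per unit length), $\sigma$ the one-dimensional velocity dispersion, and $G$ the gravitational constant. A filament segment of length $L$ then has mass $M \simeq \mu L$, and dividing by its luminosity gives mass-to-light ratios of the kind quoted above.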

  6. Large-Scale Pattern Discovery in Music

    Science.gov (United States)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.

  7. Large-scale screens of metagenomic libraries.

    Science.gov (United States)

    Pham, Vinh D; Palden, Tsultrim; DeLong, Edward F

    2007-01-01

    Metagenomic libraries archive large fragments of contiguous genomic sequences from microorganisms without requiring prior cultivation. Generating a streamlined procedure for creating and screening metagenomic libraries is therefore useful for efficient high-throughput investigations into the genetic and metabolic properties of uncultured microbial assemblages. Here, key protocols are presented on video, which we propose is the most useful format for accurately describing a long process that alternately depends on robotic instrumentation and (human) manual interventions. First, we employed robotics to spot library clones onto high-density macroarray membranes, each of which can contain duplicate colonies from twenty-four 384-well library plates. Automation is essential for this procedure not only for accuracy and speed, but also due to the miniaturization of scale required to fit the large number of library clones into highly dense spatial arrangements. Once generated, we next demonstrated how the macroarray membranes can be screened for genes of interest using modified versions of standard protocols for probe labeling, membrane hybridization, and signal detection. We complemented the visual demonstration of these procedures with detailed written descriptions of the steps involved and the materials required, all of which are available online alongside the video.

  8. Electrodialysis system for large-scale enantiomer separation

    NARCIS (Netherlands)

    Ent, van der E.M.; Thielen, T.P.H.; Cohen Stuart, M.A.; Padt, van der A.; Keurentjes, J.T.F.

    2001-01-01

    In contrast to analytical methods, the range of technologies currently applied for large-scale enantiomer separations is not very extensive. Therefore, a new system has been developed for large-scale enantiomer separations that can be regarded as the scale-up of a capillary electrophoresis system. I

  9. Safeguards instruments for Large-Scale Reprocessing Plants

    Energy Technology Data Exchange (ETDEWEB)

    Hakkila, E.A. [Los Alamos National Lab., NM (United States); Case, R.S.; Sonnier, C. [Sandia National Labs., Albuquerque, NM (United States)

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  10. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power systems with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  11. Higher Education Teachers' Descriptions of Their Own Learning: A Large-Scale Study of Finnish Universities of Applied Sciences

    Science.gov (United States)

    Töytäri, Aija; Piirainen, Arja; Tynjälä, Päivi; Vanhanen-Nuutinen, Liisa; Mäki, Kimmo; Ilves, Vesa

    2016-01-01

    In this large-scale study, higher education teachers' descriptions of their own learning were examined with qualitative analysis involving application of principles of phenomenographic research. This study is unique: it is unusual to use large-scale data in qualitative studies. The data were collected through an e-mail survey sent to 5960 teachers…

  12. Modelling large-scale evacuation of music festivals

    Directory of Open Access Journals (Sweden)

    E. Ronchi

    2016-05-01

    Full Text Available This paper explores the use of multi-agent continuous evacuation modelling for representing large-scale evacuation scenarios at music festivals. A 65,000 people capacity music festival area was simulated using the model Pathfinder. Three evacuation scenarios were developed in order to explore the capabilities of evacuation modelling during such incidents, namely (1) a preventive evacuation of a section of the festival area containing approximately 15,000 people due to a fire breaking out on a ship, (2) an escalating scenario involving the total evacuation of the entire festival area (65,000 people) due to a bomb threat, and (3) a cascading scenario involving the total evacuation of the entire festival area (65,000 people) due to the threat of an explosion caused by a ship engine overheating. This study suggests that the analysis of the people-evacuation time curves produced by evacuation models, coupled with a visual analysis of the simulated evacuation scenarios, allows for the identification of the main factors affecting the evacuation process (e.g., delay times, overcrowding at exits in relation to exit widths, etc.) and potential measures that could improve safety.
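
    A crude hand estimate of the kind such models refine can be obtained from exit widths and a specific-flow value; all numbers below are placeholders and are not taken from the Pathfinder scenarios.

        # Back-of-the-envelope evacuation-time estimate from exit capacity:
        # time ~ pre-movement delay + people / (specific_flow * total_exit_width).
        # Agent-based models capture congestion, routing and delays far more realistically.
        people = 15000                   # occupants of the evacuated section
        exit_widths_m = [4.0, 4.0, 6.0]  # widths of available exits [m] (assumed)
        specific_flow = 1.2              # persons per metre of exit width per second (assumed)
        pre_movement_delay_s = 120       # assumed average delay before people start moving

        flow_capacity = specific_flow * sum(exit_widths_m)        # persons per second
        evac_time_s = pre_movement_delay_s + people / flow_capacity
        print(f"Estimated evacuation time: {evac_time_s / 60:.1f} minutes")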

  13. Distribution probability of large-scale landslides in central Nepal

    Science.gov (United States)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution probability is also derived. The equation is validated by applying it to another area: there, the area under the receiver operating curve of the landslide distribution probability is 0.699, and the distribution probability explains more than 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
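
    The distribution-probability equation described above is a logistic-regression fit validated with the area under the ROC curve; a generic sketch of fitting and validating such a model (on synthetic predictors, not the paper's geomorphological data) looks like this.

        # Generic sketch of a landslide susceptibility model: logistic regression on
        # terrain predictors, validated with the area under the ROC curve (AUC).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n = 2000
        X = rng.normal(size=(n, 3))                   # e.g. slope, local relief, wetness index
        logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2]
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = landslide present

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"Validation AUC: {auc:.3f}")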

  14. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  15. Probabilistic cartography of the large-scale structure

    CERN Document Server

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin

    2015-01-01

    The BORG algorithm is an inference engine that derives the initial conditions given a cosmological model and galaxy survey data, and produces physical reconstructions of the underlying large-scale structure by assimilating the data into the model. We present the application of BORG to real galaxy catalogs and describe the primordial and late-time large-scale structure in the considered volumes. We then show how these results can be used for building various probabilistic maps of the large-scale structure, with rigorous propagation of uncertainties. In particular, we study dynamic cosmic web elements and secondary effects in the cosmic microwave background.

  16. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    Directory of Open Access Journals (Sweden)

    Yang Xu

    2014-01-01

    Full Text Available Large-scale multiagent teamwork has become popular in various domains. As in human society, agents coordinate with only some of the others, forming a peer-to-peer complex network structure. Their organization has been shown to be a key factor influencing their performance. To expedite team performance, we have identified three key factors. First, complex network effects may be able to promote team performance. Second, coordination interactions are routed from their sources toward capable agents. Although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team, which is independent of the network connections. In addition, the agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents can be statistically recorded, we can set up an integrated network adjustment algorithm by combining the three key factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results support our design: the reorganized network is more capable of coordinating heterogeneous agents.

  17. Investigation of Coronal Large Scale Structures Utilizing Spartan 201 Data

    Science.gov (United States)

    Guhathakurta, Madhulika

    1998-01-01

    Spartan 201, a small satellite carrying two telescopes, was launched from the Space Shuttle on April 8th, 1993, September 8th, 1994, September 7th, 1995 and November 20th, 1997. The main objective of the mission was to answer some of the most fundamental unanswered questions of solar physics: What accelerates the solar wind and what heats the corona? The two telescopes are 1) the Ultraviolet Coronal Spectrometer (UVCS), provided by the Smithsonian Astrophysical Observatory, which uses ultraviolet emissions from neutral hydrogen and ions in the corona to determine velocities of the coronal plasma within the solar wind source region, and the temperature and density distributions of protons, and 2) the White Light Coronagraph (WLC), provided by NASA's Goddard Space Flight Center, which measures visible light to determine the density distribution of coronal electrons within the same region. The PI has had the primary responsibility for the development and application of computer codes necessary for scientific data analysis activities and instrument calibration for the white-light coronagraph for the entire Spartan mission. The PI was responsible for the science output from the WLC instrument and has also been involved in the investigation of coronal density distributions in large-scale structures by use of numerical models which are (mathematically) sufficient to reproduce the details of the observed brightness and polarized brightness distributions found in SPARTAN 201 data.

  18. Large scale photovoltaic field trials. Second technical report: monitoring phase

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-09-15

    This report provides an update on the Large-Scale Building Integrated Photovoltaic Field Trials (LS-BIPV FT) programme commissioned by the Department of Trade and Industry (now the Department for Business, Enterprise and Regulatory Reform; BERR). It provides detailed profiles of the 12 projects making up this programme, which is part of the UK programme on photovoltaics and has run in parallel with the Domestic Field Trial. These field trials aim to record the experience and use the lessons learnt to raise awareness of, and confidence in, the technology and to increase UK capabilities. The projects involved: the visitor centre at the Gaia Energy Centre in Cornwall; a community church hall in London; council offices in West Oxfordshire; a sports science centre at Gloucester University; the visitor centre at Cotswold Water Park; the headquarters of the Insolvency Service; a Welsh Development Agency building; an athletics centre in Birmingham; a research facility at the University of East Anglia; a primary school in Belfast; and Barnstaple civic centre in Devon. The report describes the aims of the field trials, monitoring issues, performance, observations and trends, lessons learnt and the results of occupancy surveys.

  19. Large scale multiplex PCR improves pathogen detection by DNA microarrays

    Directory of Open Access Journals (Sweden)

    Krönke Martin

    2009-01-01

    Full Text Available Abstract. Background: Medium-density DNA microchips that carry a collection of probes for a broad spectrum of pathogens have the potential to be powerful tools for simultaneous species identification, detection of virulence factors and antimicrobial resistance determinants. However, their widespread use in microbiological diagnostics is limited by the problem of low pathogen numbers in clinical specimens, which yield relatively low amounts of pathogen DNA. Results: To increase the detection power of a fluorescence-based prototype microarray designed to identify pathogenic microorganisms involved in sepsis, we propose a large-scale multiplex PCR (LSplex PCR) for amplification of several dozen gene segments of 9 pathogenic species. This protocol employs a large set of primer pairs, potentially able to amplify 800 different gene segments that correspond to the capture probes spotted on the microarray. The LSplex protocol is shown to selectively amplify only the gene segments corresponding to the specific pathogen present in the analyte. Application of LSplex increases the microarray detection of target templates by a factor of 100 to 1000. Conclusion: Our data provide a proof of principle for the improvement of detection of pathogen DNA by microarray hybridization using LSplex PCR.

  20. Large Scale Applications of HTS in New Zealand

    Science.gov (United States)

    Wimbush, Stuart C.

    New Zealand has one of the longest-running and most consistently funded (relative to GDP) programmes in high temperature superconductor (HTS) development and application worldwide. As a consequence, it has a sustained breadth of involvement in HTS technology development stretching from the materials discovery right through to burgeoning commercial exploitation. This review paper outlines the present large scale projects of the research team at the newly-established Robinson Research Institute of Victoria University of Wellington. These include the construction and grid-based testing of a three-phase 1 MVA 2G HTS distribution transformer utilizing Roebel cable for its high-current secondary windings and the development of a cryogen-free conduction-cooled 1.5 T YBCO-based human extremity magnetic resonance imaging system. Ongoing activities supporting applications development such as low-temperature full-current characterization of commercial superconducting wires and the implementation of inductive flux-pump technologies for efficient brushless coil excitation in superconducting magnets and rotating machines are also described.

  1. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  2. Ferroelectric opening switches for large-scale pulsed power drivers.

    Energy Technology Data Exchange (ETDEWEB)

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000 times higher energy density than VDT and has a great number of other substantial advantages, but it has all but been neglected for all of these decades. The uniform explanation for neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes, providing an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma; this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap; this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilovolts and tens of kiloamperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional applications in optics and transducer applications. Emphasis is on the use of high-performance ferroelectrics with the objective of developing an opening switch that would be suitable for large-scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of

  3. Food security through large scale investments in agriculture

    Science.gov (United States)

    Rulli, M.; D'Odorico, P.

    2013-12-01

    Most of the human appropriation of freshwater resources is for food production. There is some concern that in the near future the finite freshwater resources available on Earth might not be sufficient to meet the increasing human demand for agricultural products. In the late 1700s Malthus argued that in the long run humanity would not have enough resources to feed itself. Malthus' analysis, however, did not account for the emergence of technological innovations that could increase the rate of food production. Modern and contemporary history has seen at least three major technological advances that have increased humans' access to food, namely, the industrial revolution, the green revolution, and the intensification of global trade. Here we argue that a fourth revolution has just started to happen. It involves foreign direct investments in agriculture, which intensify the crop yields of potentially highly productive agricultural lands by introducing the use of more modern technologies. The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions for commercial farming will bring the technology required to close the existing yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of verified land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with large scale land acquisitions. We

  4. Leashes and Lies: Navigating the Colonial Tensions of Institutional Ethics of Research Involving Indigenous Peoples in Canada

    Directory of Open Access Journals (Sweden)

    Martha L. Stiegman

    2015-06-01

    Full Text Available Ethical standards of conduct in research involving humans undertaken at Canadian universities have been guided by the three federal research funding agencies through the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (or TCPS for short) since 1998. The statement was revised for the first time in 2010 and is now commonly referred to as the TCPS2, which includes an entire chapter (Chapter 9) devoted to the subject of research involving First Nations, Inuit, and Métis peoples of Canada. While the establishment of TCPS2 is an important initial step on the long road towards decolonizing Indigenous research within the academy, our frustrations, which echo those of many colleagues struggling to do research “in a good way” within this framework (see, for example, Ball & Janyst, 2008; Bull, 2008; Guta et al., 2010), highlight the urgent work that remains to be done if university-based researchers are to be enabled by establishment channels to do “ethical” research with Aboriginal peoples. In our (and others’) experience to date, we seem to have been able to do research in a good way despite, not because of, the TCPS2 (see Castleden et al., 2012). The disconnect between the stated goals of TCPS2 and the challenges researchers face when attempting to navigate how individual, rotating members of REBs interpret the TCPS2 and operate within this framework begs the question: Wherein lies the disconnect? A number of scholars are currently researching this divide (see, for example, Guta et al., 2010; Flicker & Worthington, 2011; and Guta et al., 2013). In this editorial, we offer an anecdote to illustrate our experience regarding some of these tensions and then offer reflections about what might need to change for the next iteration of the TCPS.

  5. PetroChina to Expand Dushanzi Refinery on Large Scale

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A large-scale expansion project for PetroChina Dushanzi Petrochemical Company has been given the green light, a move which will make it one of the largest refineries and petrochemical complexes in the country.

  6. Learning networks for sustainable, large-scale improvement.

    Science.gov (United States)

    McCannon, C Joseph; Perla, Rocco J

    2009-05-01

    Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.

  7. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Document Server

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  8. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU Qiang

    2004-01-01

    @@ The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.

  9. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU Qiang

    2004-01-01

    The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.……

  10. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  11. Balancing modern Power System with large scale of wind power

    OpenAIRE

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the s...

  12. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....

  13. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping scale effects...... are presented as the small-scale model underpredicts the overtopping discharge....

  14. A study of MLFMA for large-scale scattering problems

    Science.gov (United States)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first principle physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied in terms of parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through multiple computation method hybridization.
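
    The memory-estimation point above can be illustrated with a rough scaling argument: MLFMA storage grows roughly as O(N log N), with a dense near-field part proportional to N. The sketch below uses hypothetical per-unknown constants (they are not taken from LSSP or ScaleME) simply to show how such an estimate can be assembled.

    import math

    def estimate_memory_gb(n_unknowns,
                           bytes_per_unknown_nearfield=2000,
                           bytes_per_unknown_per_level=200):
        # assumed octree-depth heuristic; real codes size this from the geometry
        levels = max(1, int(math.log2(n_unknowns)) - 10)
        nearfield = n_unknowns * bytes_per_unknown_nearfield
        translation = n_unknowns * bytes_per_unknown_per_level * levels
        return (nearfield + translation) / 1e9

    for n in (1_000_000, 20_000_000):
        print(f"N = {n:>12,}: ~{estimate_memory_gb(n):7.1f} GB (illustrative only)")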

  15. Entrepreneurial governance: challenges of large-scale property-led urban regeneration projects

    NARCIS (Netherlands)

    Tasan-Kok, M.T.

    2010-01-01

    Large-scale urban regeneration projects become highly complex as they involve multiple actors with different expectations. In general, the implementation of such projects entails building governance regimes at the city or regional level, but this often means forging partnerships between public and p

  16. Output regulation of large-scale hydraulic networks with minimal steady state power consumption

    NARCIS (Netherlands)

    Jensen, Tom Nørgaard; Wisniewski, Rafał; De Persis, Claudio; Kallesøe, Carsten Skovmose

    2014-01-01

    An industrial case study involving a large-scale hydraulic network is examined. The hydraulic network underlies a district heating system, with an arbitrary number of end-users. The problem of output regulation is addressed along with an optimization criterion for the control. The fact that the syste

  17. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  18. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  19. Advances in Large-Scale Solar Heating and Long Term Storage in Denmark

    DEFF Research Database (Denmark)

    Heller, Alfred

    2000-01-01

    According to information from the European Large-Scale Solar Heating Network (see http://www.hvac.chalmers.se/cshp/), the area of installed solar collectors for large-scale applications in Europe is approximately 8 million m2, corresponding to about 4000 MW of thermal power. The 11 plants...... of the total 51 plants are equipped with long-term storage. In Denmark, 7 plants are installed, comprising approximately 18,000 m2 of collector area, with new plants planned. The development of these plants and the involved technologies will be presented in this paper, with a focus on the improvements for Danish...... Central Solar Heating Plants, servicing District Heating and related developments in large-scale thermal storage. Central solar heating today is a mature and economically realistic solution for district heating based on a renewable source. The cost for solar collectors has decreased by nearly ¼ during

  20. Wildfires, mountain pine beetle and large-scale climate in Northern North America.

    Science.gov (United States)

    Macias Fauria, M.; Johnson, E. A.

    2009-05-01

    Research on the interactions between the biosphere and atmosphere and ocean/atmosphere dynamics, specifically on the coupling between ecological processes and large-scale climate, is presented in two studies in Northern North America: the occurrence of large lightning wildfires and the forest area affected by the mountain pine beetle (Dendroctonus ponderosae, MPB). In both cases, large-scale climatic patterns such as the Pacific Decadal Oscillation (PDO) and the Arctic Oscillation (AO) operate as low- and high-frequency frameworks, respectively, that control the occurrence, duration and spatial correlation over large areas of key local weather variables which affect specific ecological processes. Warm PDO phases tend to produce persistent (more than 10 days long) positive mid-troposphere anomalies (blocking highs) over western Canada and Alaska. Likewise, positive (negative) AO configurations increase the frequency of blocking highs at mid (high) latitudes of the Northern Hemisphere. Under these conditions, lack of precipitation and prevailing warm meridional air flow rapidly dry fuel over large areas and increase fire hazard. The spatiotemporal patterns of occurrence of large lightning wildfires in Canada and Alaska for 1959-1999 were largely explained by the action and possible interaction of the AO and PDO, the AO being more influential over Eastern Canada, the PDO over Western Canada and Alaska. Changes in the dynamics of the PDO are linked to the occurrence of cold winter temperatures in British Columbia (BC), Western Canada. Reduced frequency of cold events during warm PDO winters is consistent with a northward-displaced polar jet stream inhibiting the outflow of cold Arctic air over BC. Likewise, the AO influences the occurrence of winter cold spells in the area. PDO, and to a lesser degree AO, were strongly related to MPB synchrony in BC during 1959-2002, operating through the control of the frequency of extreme cold winter temperatures that affect MPB larvae

  1. Large-scale-vortex dynamos in planar rotating convection

    CERN Document Server

    Guervilly, Céline; Jones, Chris A

    2016-01-01

    Several recent studies have demonstrated how large-scale vortices may arise spontaneously in rotating planar convection. Here we examine the dynamo properties of such flows in rotating Boussinesq convection. For moderate values of the magnetic Reynolds number ($100 \\lesssim Rm \\lesssim 550$, with $Rm$ based on the box depth and the convective velocity), a large-scale (i.e. system-size) magnetic field is generated. The amplitude of the magnetic energy oscillates in time, out of phase with the oscillating amplitude of the large-scale vortex. The dynamo mechanism relies on those components of the flow that have length scales lying between that of the large-scale vortex and the typical convective cell size; smaller-scale flows are not required. The large-scale vortex plays a crucial role in the magnetic induction despite being essentially two-dimensional. For larger magnetic Reynolds numbers, the dynamo is small scale, with a magnetic energy spectrum that peaks at the scale of the convective cells. In this case, ...

  2. The Internet As a Large-Scale Complex System

    Science.gov (United States)

    Park, Kihong; Willinger, Walter

    2005-06-01

    The Internet may be viewed as a "complex system" with diverse features and many components that can give rise to unexpected emergent phenomena, revealing much about its own engineering. This book brings together chapter contributions from a workshop held at the Santa Fe Institute in March 2001. This volume captures a snapshot of some features of the Internet that may be fruitfully approached using a complex systems perspective, meaning using interdisciplinary tools and methods to tackle the subject area. The Internet penetrates the socioeconomic fabric of everyday life; a broader and deeper grasp of the Internet may be needed to meet the challenges facing the future. The resulting empirical data have already proven to be invaluable for gaining novel insights into the network's spatio-temporal dynamics, and can be expected to become even more important when trying to explain the Internet's complex and emergent behavior in terms of elementary networking-based mechanisms. The discoveries of fractal or self-similar network traffic traces, power-law behavior in network topology and World Wide Web connectivity are instances of unsuspected, emergent system traits. Another important factor at the heart of fair, efficient, and stable sharing of network resources is user behavior. Network systems, when inhabited by selfish or greedy users, take on the traits of a noncooperative multi-party game, and their stability and efficiency are integral to understanding the overall system and its dynamics. Lastly, fault-tolerance and robustness of large-scale network systems can exhibit spatial and temporal correlations whose effective analysis and management may benefit from rescaling techniques applied in certain physical and biological systems. The present book will bring together several of the leading workers involved in the analysis of complex systems with the future development of the Internet.

  3. Research on Power System Scheduling Models Incorporating Direct Power Purchase by Large Consumers under Large-Scale Wind Power Integration%An Analysis Model of Power System With Large-scale Wind Power and Transaction Mode of Direct Power Purchase by Large Consumers Involved in System Scheduling

    Institute of Scientific and Technical Information of China (English)

    张文韬; 王秀丽; 吴雄; 姚力

    2015-01-01

    ABSTRACT: Large-scale integration of wind power makes peak regulation increasingly complex. At the same time, the influence on the power system of carrying out the transaction mode of direct power purchase by large consumers still urgently needs study, because this transaction mode is a key link of the electricity market reform that introduces competition on the retail side. Therefore, depending on the degree of attention decision makers pay to wind power, two system scheduling models involving this transaction mode were set up: a comprehensive benefits priority scheduling model and a wind power priority scheduling model. Both models aim to maximize the expected profit of the power generation system while considering the randomness of wind power output. An auto-regressive moving average (ARMA) model and the Monte Carlo technique are used to simulate a large number of wind power scenarios, and a scenario reduction algorithm is applied to obtain a finite number of representative scenarios. Peak and valley time-of-use prices are also considered, and mixed integer programming is used to obtain the operating schedule of every unit and the arrangement of the power directly purchased by large consumers in all scenarios. The economic and operational benefits of this transaction mode can also be calculated with the above methods. A test system is used to verify the rationality and feasibility of the proposed models and algorithm.%The integration of large-scale wind power makes power system peak regulation increasingly complex. At the same time, as an important step in opening retail-side competition during the electricity market reform, the influence of direct power purchase by large consumers on system scheduling and operation still urgently needs study. Accordingly, two scheduling models incorporating direct power purchase by large consumers are established according to the weight decision makers place on wind power: a wind power benefit trade-off model and a wind power priority scheduling model. Both models take maximization of the expected operating benefit of the generation system as the objective,
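
    A minimal sketch of the scenario-generation and scenario-reduction step described above is given below. It assumes an AR(1) process (a special case of ARMA) for the wind state, a logistic mapping to installed capacity, and a simple greedy distance-based reduction; the coefficients, capacity and scenario counts are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    HOURS, N_SCEN, N_KEEP = 24, 200, 5
    phi, sigma, capacity = 0.8, 0.1, 100.0   # assumed AR(1) parameters and installed MW

    def simulate_scenario():
        x = np.zeros(HOURS)
        for t in range(1, HOURS):
            x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
        # map the AR(1) state through a logistic curve to a bounded power output
        return capacity / (1.0 + np.exp(-x))

    scenarios = np.array([simulate_scenario() for _ in range(N_SCEN)])

    # Crude forward reduction: greedily keep scenarios that are far (in L2 distance)
    # from those already selected, so a handful of representatives span the set.
    selected = [0]
    for _ in range(N_KEEP - 1):
        kept = scenarios[np.array(selected)]
        dists = np.min(np.linalg.norm(scenarios[:, None, :] - kept[None, :, :], axis=2), axis=1)
        selected.append(int(np.argmax(dists)))

    print("representative scenario indices:", selected)
    print("hourly mean of kept scenarios (MW):", np.round(scenarios[selected].mean(axis=0), 1))

    The reduced scenario set would then enter the mixed integer dispatch problem in place of the full Monte Carlo sample, which is the usual reason for the reduction step.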

  4. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  5. Large Scale Magnetohydrodynamic Dynamos from Cylindrical Differentially Rotating Flows

    CERN Document Server

    Ebrahimi, F

    2015-01-01

    For cylindrical differentially rotating plasmas threaded with a uniform vertical magnetic field, we study large-scale magnetic field generation from finite amplitude perturbations using analytic theory and direct numerical simulations. Analytically, we impose helical fluctuations, a seed field, and a background flow and use quasi-linear theory for a single mode. The predicted large-scale field growth agrees with numerical simulations in which the magnetorotational instability (MRI) arises naturally. The vertically and azimuthally averaged toroidal field is generated by a fluctuation-induced EMF that depends on differential rotation. Given fluctuations, the method also predicts large-scale field growth for MRI-stable rotation profiles and flows with no rotation but shear.

  6. Optimization of large-scale fabrication of dielectric elastomer transducers

    DEFF Research Database (Denmark)

    Hassouneh, Suzan Sager

    Dielectric elastomers (DEs) have gained substantial ground in many different applications, such as wave energy harvesting, valves and loudspeakers. For DE technology to be commercially viable, it is necessary that any large-scale production operation is nondestructive, efficient and cheap. Danfoss...... Polypower A/S employs a large-scale process for manufacturing DE films with one-sided corrugated surfaces. The DEs are manufactured by coating an elastomer mixture to a corrugated carrier web, thereby imprinting the corrugations onto the elastomer. The corrugated elastomer is then sputtered with metal...... strength. Other issues may also arise, depending on how the elements are assembled. This thesis is based on optimising the large-scale manufacture of DE transducers. The hot embossing technology is used to impart corrugations onto elastomer film surfaces. Embossing, which was performed for samples...

  7. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...... is very consistent, while the WMAP 9 year release appears more contaminated by non-CMB residuals than the 7 year release. The second part is concerned with the anomalies of the CMB from two approaches. One is based on an extended inflationary model as the origin of one specific large scale anomaly, namely.... Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of quadrupole and octupole and the dipole modulation.

  8. A relativistic signature in large-scale structure

    Science.gov (United States)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  9. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...

  10. Transport of Large Scale Poloidal Flux in Black Hole Accretion

    CERN Document Server

    Beckwith, Kris; Krolik, Julian H

    2009-01-01

    We perform a global, three-dimensional GRMHD simulation of an accretion torus embedded in a large scale vertical magnetic field orbiting a Schwarzschild black hole. This simulation investigates how a large scale vertical field evolves within a turbulent accretion disk and whether global magnetic field configurations suitable for launching jets and winds can develop. We identify a ``coronal mechanism'' of magnetic flux motion, which dominates the global flux evolution. In this coronal mechanism, magnetic stresses driven by orbital shear create large-scale half-loops of magnetic field that stretch radially inward and then reconnect, leading to discontinuous jumps in the location of magnetic flux. This mechanism is supplemented by a smaller amount of flux advection in the accretion flow proper. Because the black hole in this case does not rotate, the magnetic flux on the horizon determines the mean magnetic field strength in the funnel around the disk axis; this field strength is regulated by a combination of th...

  11. Human pescadillo induces large-scale chromatin unfolding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hao; FANG Yan; HUANG Cuifen; YANG Xiao; YE Qinong

    2005-01-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  12. [Issues of large scale tissue culture of medicinal plant].

    Science.gov (United States)

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of producing medicinal plants, it still faces problems such as the stability of the material, the safety of transgenic medicinal plants and the optimization of culture conditions. Establishing a sound evaluation system suited to the characteristics of each medicinal plant is the key measure to assure the sustainable development of large-scale tissue culture of medicinal plants.

  13. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available This paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analyzed; load simulation of the flange fatigue working condition is carried out with the Bladed software; the flange fatigue load spectrum is obtained with the rain-flow counting method; and finally the fatigue analysis of the top flange is performed with the fatigue analysis software MSC.Fatigue and Palmgren-Miner linear cumulative damage theory. The results offer a new approach to flange fatigue analysis of large-scale wind turbine generators and have practical engineering value.
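
    The damage-accumulation step referred to above (Palmgren-Miner theory applied to a rain-flow counted load spectrum) can be sketched in a few lines. The S-N curve constants and the counted stress-range spectrum below are hypothetical placeholders, not results of the MSC.Fatigue analysis described in the abstract.

    # Assumed S-N curve of the detail: N = C / S^m, with S in MPa (hypothetical constants)
    C, m = 2.0e12, 3.0

    # Hypothetical rain-flow counted spectrum: (stress range in MPa, counted cycles)
    spectrum = [(40.0, 2.0e6), (80.0, 3.0e5), (120.0, 4.0e4), (200.0, 1.5e3)]

    def cycles_to_failure(stress_range):
        return C / stress_range ** m

    damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
    print(f"Palmgren-Miner damage sum D = {damage:.3f} "
          f"({'fails' if damage >= 1.0 else 'survives'} the assumed spectrum)")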

  14. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    Generation expansion planning (GEP) is the problem of finding the optimal strategy to plan the construction of new generation capacity while satisfying technical and economic constraints. In the deregulated and competitive environment, large-scale integration of wind generation (WG) in power systems has...... necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation is proposed in this paper. The first phase is the investment decision, while the second phase is production

  15. Distributed chaos tuned to large scale coherent motions in turbulence

    CERN Document Server

    Bershadskii, A

    2016-01-01

    It is shown, using direct numerical simulations and laboratory experiment data, that distributed chaos is often tuned to large scale coherent motions in anisotropic inhomogeneous turbulence. The examples considered are: the fully developed turbulent boundary layer (range of coherence: $14 < y^{+} < 80$), turbulent thermal convection (in a horizontal cylinder), and Couette-Taylor flow. Two ways of tuning are described: one via the fundamental frequency (wavenumber) and another via a subharmonic (period doubling). In the second way the large scale coherent motions are a natural component of distributed chaos. In all considered cases spontaneous breaking of space translational symmetry is accompanied by reflexional symmetry breaking.

  16. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and associated uncertainty quantification has become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  17. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei

    2012-01-01

    evaluation on wind farm is necessarily required. Also, because large scale offshore wind farm has a long repair time and a high repair cost as well as a high investment cost, it is essential to take into account the economic aspect. One of methods to efficiently build and to operate wind farm is to construct......Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid as large scale wind farm which is made up of dozens of wind turbines, and the scale of wind farm is more increased recently. Due to intermittent and variable wind source, reliability...

  18. Optimal Dispatching of Large-scale Water Supply System

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper deals with the use of optimal control techniques in large-scale water distribution networks. According to the network characteristics and actual state of the water supply system in China, an implicit model, which may be solved using the hierarchical optimization method, is established. In particular, based on the analysis of a water supply system containing variable-speed pumps, a software tool has been developed successfully. The application of this model to the city of Shenyang (China) is compared to the empirical dispatching strategy. The results of this study show that the developed model is a very promising optimization method for controlling large-scale water supply systems.

  19. Practical Large Scale Syntheses of New Drug Candidates

    Institute of Scientific and Technical Information of China (English)

    Hui-Yin Li

    2001-01-01

    This presentation will focus on practical large-scale syntheses of lead compounds and drug candidates from three major therapeutic areas at DuPont Pharmaceuticals Research Laboratory: 1) DMP777, a selective, non-toxic, orally active human elastase inhibitor; 2) DMP754, a potent glycoprotein IIb/IIIa antagonist; 3) R-warfarin, the pure enantiomeric form of warfarin. The key technology used for preparing these drug candidates is asymmetric hydrogenation under very mild reaction conditions, which produced very high quality final products at large scale (>99% de, >99 A% and >99 wt%). Some practical and GMP aspects of process development will also be discussed.

  20. Large-scale liquid scintillation detectors for solar neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Benziger, Jay B.; Calaprice, Frank P. [Princeton University Princeton, Princeton, NJ (United States)

    2016-04-15

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed. (orig.)

  1. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Morteza Hajizadeh-Oghaz; Reza Shoja Razavi; Mohammadreza Loghman Estarki

    2014-08-01

    Yttria–stabilized zirconia nanopowders were synthesized on a relatively large scale using Pechini method. In the present paper, nearly spherical yttria-stabilized zirconia nanopowders with tetragonal structure were synthesized by Pechini process from zirconium oxynitrate hexahydrate, yttrium nitrate, citric acid and ethylene glycol. The phase and structural analyses were accomplished by X-ray diffraction; morphological analysis was carried out by field emission scanning electron microscopy and transmission electron microscopy. The results revealed nearly spherical yttria–stabilized zirconia powder with tetragonal crystal structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale.

  2. Statistical equilibria of large scales in dissipative hydrodynamic turbulence

    CERN Document Server

    Dallas, Vassilios; Alexakis, Alexandros

    2015-01-01

    We present a numerical study of the statistical properties of three-dimensional dissipative turbulent flows at scales larger than the forcing scale. Our results indicate that the large scale flow can be described to a large degree by the truncated Euler equations with the predictions of the zero flux solutions given by absolute equilibrium theory, both for helical and non-helical flows. Thus, the functional shape of the large scale spectra can be predicted provided that scales sufficiently larger than the forcing length scale but also sufficiently smaller than the box size are examined. Deviations from the predictions of absolute equilibrium are discussed.

  3. Large-scale streaming motions and microwave background anisotropies

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Gonzalez, E.; Sanz, J.L. (Cantabria Universidad, Santander (Spain))

    1989-12-01

    The minimal microwave background radiation anisotropy implied by the existence of large-scale streaming motions is calculated on each angular scale. These minimal anisotropies, due to the Sachs-Wolfe effect, are obtained for different experiments, and give quite different results from those found in previous work. They are not in conflict with present theories of galaxy formation. Upper limits are imposed on the scale at which large-scale streaming motions can occur by extrapolating results from present double-beam-switching experiments. 17 refs.

  4. The fractal octahedron network of the large scale structure

    CERN Document Server

    Battaner, E

    1998-01-01

    In a previous article, we have proposed that the large scale structure network generated by large scale magnetic fields could consist of a network of octahedra only contacting at their vertexes. Assuming such a network could arise at different scales producing a fractal geometry, we study here its properties, and in particular how a sub-octahedron network can be inserted within an octahedron of the large network. We deduce that the scale of the fractal structure would range from $\\approx$100 Mpc, i.e. the scale of the deepest surveys, down to about 10 Mpc, as other smaller scale magnetic fields were probably destroyed in the radiation dominated Universe.

  5. Practical Large Scale Syntheses of New Drug Candidates

    Institute of Scientific and Technical Information of China (English)

    Hui-Yin Li

    2001-01-01

    This presentation will focus on practical large-scale syntheses of lead compounds and drug candidates from three major therapeutic areas at DuPont Pharmaceuticals Research Laboratory: 1) DMP777, a selective, non-toxic, orally active human elastase inhibitor; 2) DMP754, a potent glycoprotein IIb/IIIa antagonist; 3) R-warfarin, the pure enantiomeric form of warfarin. The key technology used for preparing these drug candidates is asymmetric hydrogenation under very mild reaction conditions, which produced very high quality final products at large scale (>99% de, >99 A% and >99 wt%). Some practical and GMP aspects of process development will also be discussed.……

  6. Fast paths in large-scale dynamic road networks

    CERN Document Server

    Nannicini, Giacomo; Barbier, Gilles; Krob, Daniel; Liberti, Leo

    2007-01-01

    Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer a realistic fast path computation to GPS terminal enabled vehicles. The heuristic solution method we propose is based on a highway hierarchy-based shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.
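
    As a minimal illustration of a query on a partially dynamic road network, the sketch below recomputes edge travel times from per-edge slowdown factors and runs a plain Dijkstra search. This is only a stand-in for the highway-hierarchy method described in the paper, and the toy graph and traffic factors are hypothetical.

    import heapq

    # edge list: (u, v, free_flow_minutes); toy network, not real map data
    edges = [("A", "B", 4), ("B", "C", 3), ("A", "D", 2), ("D", "C", 6), ("C", "E", 5)]
    traffic = {("A", "B"): 2.5}   # dynamic slowdown factor on a monitored edge

    graph = {}
    for u, v, t in edges:
        w = t * traffic.get((u, v), 1.0)
        graph.setdefault(u, []).append((v, w))

    def fastest_path(src, dst):
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], dst
        while node != src:
            path.append(node)
            node = prev[node]
        return [src] + path[::-1], dist[dst]

    print(fastest_path("A", "E"))   # congestion on A-B pushes the route through D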

  7. Large-Scale Spray Releases: Initial Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  8. Large Scale Meteorological Pattern of Extreme Rainfall in Indonesia

    Science.gov (United States)

    Kuswanto, Heri; Grotjahn, Richard; Rachmi, Arinda; Suhermi, Novri; Oktania, Erma; Wijaya, Yosep

    2014-05-01

    dates involving observations from multiple sites (rain gauges). The approach combines POT (Peaks Over Threshold) with 'declustering' of the data to approximate independence, based on the autocorrelation structure of each rainfall series. The cross correlation among sites is also considered in developing the event criteria, yielding a rational choice of the extreme dates given the 'spotty' nature of the intense convection. Based on the identified dates, we are developing a supporting tool for forecasting extreme rainfall based on the corresponding large-scale meteorological patterns (LSMPs). The LSMPs methodology focuses on the larger-scale patterns that models are better able to forecast, as those larger-scale patterns create the conditions fostering the local extreme weather event. A bootstrap resampling method is applied to highlight the key features that are statistically significantly associated with the extreme events. Grotjahn, R., and G. Faure, 2008: Composite Predictor Maps of Extraordinary Weather Events in the Sacramento California Region. Weather and Forecasting, 23, 313-335.
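
    The Peaks-Over-Threshold selection with declustering described above can be sketched as follows: a runs rule merges exceedances that fall within a short separation window and keeps only the peak day of each cluster. The synthetic rainfall series, threshold quantile and run length below are illustrative assumptions, not values used in the study.

    import numpy as np

    rng = np.random.default_rng(42)
    rain = rng.gamma(shape=0.4, scale=12.0, size=3 * 365)  # synthetic daily rainfall (mm)

    threshold = np.quantile(rain, 0.98)   # assumed high quantile threshold
    run_length = 3                        # exceedances closer than this form one event

    exceed_days = np.flatnonzero(rain > threshold)

    # Runs declustering: keep only the wettest day within each cluster of exceedances.
    events, cluster = [], [exceed_days[0]]
    for day in exceed_days[1:]:
        if day - cluster[-1] <= run_length:
            cluster.append(day)
        else:
            events.append(max(cluster, key=lambda d: rain[d]))
            cluster = [day]
    events.append(max(cluster, key=lambda d: rain[d]))

    print(f"threshold: {threshold:.1f} mm, raw exceedances: {len(exceed_days)}, "
          f"declustered extreme events: {len(events)}")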

  9. Policy Writing as Dialogue: Drafting an Aboriginal Chapter for Canada's Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans

    Directory of Open Access Journals (Sweden)

    Jeff Reading

    2010-07-01

    Full Text Available Writing policy that applies to First Nations, Inuit and Métis peoples in Canada has become more interactive as communities and their representative organizations press for practical recognition of an Aboriginal right of self-determination. When the policy in development is aimed at supporting “respect for human dignity” as it is in the case of ethics of research involving humans, the necessity of engaging the affected population becomes central to the undertaking.

  10. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  11. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach...

  12. Water Implications of Large-Scale Land Acquisitions in Ghana

    Directory of Open Access Journals (Sweden)

    Timothy Olalekan Williams

    2012-06-01

    The paper offers recommendations which can help the government to achieve its stated objective of developing a "policy framework and guidelines for large-scale land acquisitions by both local and foreign investors for biofuels that will protect the interests of investors and the welfare of Ghanaian farmers and landowners".

  13. A Large-Scale 3D Object Recognition dataset

    DEFF Research Database (Denmark)

    Sølund, Thomas; Glent Buch, Anders; Krüger, Norbert;

    2016-01-01

    This paper presents a new large scale dataset targeting evaluation of local shape descriptors and 3d object recognition algorithms. The dataset consists of point clouds and triangulated meshes from 292 physical scenes taken from 11 different views; a total of approximately 3204 views. Each...

  14. High-Throughput, Large-Scale SNP Genotyping: Bioinformatics Considerations

    OpenAIRE

    Margetic, Nino

    2004-01-01

    In order to provide a high-throughput, large-scale genotyping facility at the national level we have developed a set of inter-dependent information systems. A combination of commercial, publicly-available and in-house developed tools links a series of data repositories based both on flat files and relational databases providing an almost complete semi-automated pipeline.

  15. Chain Analysis for large-scale Communication systems

    NARCIS (Netherlands)

    Grijpink, Jan

    2010-01-01

    The chain concept is introduced to explain how large-scale information infrastructures so often fail and sometimes even backfire. Next, the assessment framework of the doctrine of Chain-computerisation and its chain analysis procedure are outlined. In this procedure chain description precedes assess

  16. Cost Overruns in Large-scale Transportation Infrastructure Projects

    DEFF Research Database (Denmark)

    Cantarelli, Chantal C; Flyvbjerg, Bent; Molin, Eric J. E

    2010-01-01

    Managing large-scale transportation infrastructure projects is difficult due to frequent misinformation about the costs which results in large cost overruns that often threaten the overall project viability. This paper investigates the explanations for cost overruns that are given in the literature...

  17. Large scale radial stability density of Hill's equation

    NARCIS (Netherlands)

    Broer, Henk; Levi, Mark; Simo, Carles

    2013-01-01

    This paper deals with large scale aspects of Hill's equation $\ddot{x} + (a + b\,p(t))\,x = 0$, where p is periodic with a fixed period. In particular, the interest is the asymptotic radial density of the stability domain in the (a, b)-plane. It turns out that this density changes discontinuously in a certa
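
    Numerically, the stability of Hill's equation at a point (a, b) can be checked from the trace of the monodromy matrix over one period: the solution is bounded when the absolute value of the trace is below 2. The sketch below takes p(t) = cos(t) (the Mathieu special case) and an arbitrary sample point; both choices are illustrative and are not taken from the paper.

    import math

    def monodromy_trace(a, b, period=2.0 * math.pi, steps=4000):
        def rhs(t, y):
            x, v = y
            return (v, -(a + b * math.cos(t)) * x)

        def integrate(y0):
            h, t, y = period / steps, 0.0, list(y0)
            for _ in range(steps):  # classical fixed-step RK4
                k1 = rhs(t, y)
                k2 = rhs(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
                k3 = rhs(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
                k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(2)])
                y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
                t += h
            return y

        col1 = integrate((1.0, 0.0))   # fundamental solution with x(0)=1, x'(0)=0
        col2 = integrate((0.0, 1.0))   # fundamental solution with x(0)=0, x'(0)=1
        return col1[0] + col2[1]       # trace of the monodromy matrix

    a, b = 1.5, 0.3                    # hypothetical sample point in the (a, b)-plane
    tr = monodromy_trace(a, b)
    print(f"trace = {tr:.4f} -> {'stable' if abs(tr) < 2 else 'unstable'} at (a, b) = ({a}, {b})")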

  18. Large-Scale Innovation and Change in UK Higher Education

    Science.gov (United States)

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  19. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena i

  20. Ultra-Large-Scale Systems: Scale Changes Everything

    Science.gov (United States)

    2008-03-06

    Presentation slides (Linda Northrop, March 2008) on “Ultra-Large-Scale Systems: Scale Changes Everything”. Recoverable topics from the slide text: statistical mechanics and complexity; recurring “scale-free” network structure (the Internet, yeast protein networks); design representation and analysis; assimilation; and determining and managing requirements.

  1. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) CAD drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  2. The Role of Plausible Values in Large-Scale Surveys

    Science.gov (United States)

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…
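
    In practice, a secondary analyst computes the statistic of interest once per plausible value and combines the results with Rubin's rules: the point estimate is the mean across plausible values, and the total variance adds the average sampling variance to (1 + 1/M) times the between-imputation variance. The sketch below uses hypothetical numbers, not NAEP, TIMSS or PISA data.

    import statistics

    estimates = [502.3, 498.7, 501.1, 499.9, 503.4]   # statistic computed per plausible value
    samp_vars = [4.1, 4.3, 4.0, 4.2, 4.1]             # sampling variance per plausible value
    M = len(estimates)

    point = statistics.mean(estimates)                # combined point estimate
    within = statistics.mean(samp_vars)               # average sampling variance
    between = statistics.variance(estimates)          # between-imputation (measurement) variance
    total_var = within + (1 + 1 / M) * between

    print(f"combined estimate = {point:.2f}, standard error = {total_var ** 0.5:.2f}")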

  3. Primordial non-Gaussianity from the large scale structure

    CERN Document Server

    Desjacques, Vincent

    2010-01-01

    Primordial non-Gaussianity is a potentially powerful discriminant of the physical mechanisms that generated the cosmological fluctuations observed today. Any detection of non-Gaussianity would have profound implications for our understanding of cosmic structure formation. In this paper, we review past and current efforts in the search for primordial non-Gaussianity in the large scale structure of the Universe.

  4. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
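
    A minimal sketch of the price-coordination idea is given below: each flexible unit solves a local quadratic subproblem given the current price, and a coordinator updates the price with a subgradient step on the balance constraint. The unit costs, bounds, target and step size are illustrative assumptions, not values from the paper.

    units = [  # (quadratic cost coefficient, preferred setpoint, max power) - hypothetical
        (1.0, 2.0, 5.0),
        (0.5, 4.0, 6.0),
        (2.0, 1.0, 3.0),
    ]
    target = 9.0          # total power the coordinator must balance
    price, step = 0.0, 0.2

    def local_response(unit, price):
        """Unit minimizes a*(p - pref)^2 - price*p over 0 <= p <= pmax."""
        a, pref, pmax = unit
        p = pref + price / (2.0 * a)      # unconstrained minimizer of the local problem
        return min(max(p, 0.0), pmax)

    for _ in range(200):
        powers = [local_response(u, price) for u in units]
        imbalance = target - sum(powers)  # subgradient of the dual function
        price += step * imbalance         # coordinator's price ("negotiation") update

    print("final price:", round(price, 3))
    print("unit powers:", [round(p, 3) for p in powers], "sum =", round(sum(powers), 3))

    The price plays the role of the negotiated coordination signal: once the imbalance vanishes, the locally optimal responses of the units jointly satisfy the balance constraint.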

  5. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    of marine sand accumulated in an aggrading-prograding shoal and on a prograding shoreface during and within 3 to 4 decades (“healing phase”) after the most destructive storm documented for the Wadden Sea. Furthermore, we show that the impact of this storm caused large-scale shoreline erosion and barrier...

  6. Temporal Variation of Large Scale Flows in the Solar Interior

    Indian Academy of Sciences (India)

    Sarbani Basu; H. M. Antia

    2000-09-01

    We attempt to detect short-term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations by about 10 m/s on the scale of few days.

  7. GroFi: Large-scale fiber placement research facility

    Directory of Open Access Journals (Sweden)

    Christian Krombholz

    2016-03-01

    and processes for large-scale composite components. Due to the use of coordinated and simultaneously working layup units, a high flexibility of the research platform is achieved. This allows the investigation of new materials, technologies and processes both on small coupons and on large components such as wing covers or fuselage skins.

  8. Main Achievements of Cotton Large-scale Transformation System

    Institute of Scientific and Technical Information of China (English)

    LI Fu-guang; LIU Chuan-liang; WU Zhi-xia; ZHANG Chao-jun; ZHANG Xue-yan

    2008-01-01

    A large-scale cotton transformation system was established based on innovations in cotton transformation methods. It yields 8000 transgenic cotton plants per year by efficiently combining Agrobacterium tumefaciens-mediated, pollen-tube pathway and biolistic methods. More than 1000 transgenic lines are selected from the transgenic plants using molecular-assisted and conventional breeding methods.

  9. Flexibility in design of large-scale methanol plants

    Institute of Scientific and Technical Information of China (English)

    Esben Lauge Sørensen; Helge Holm-Larsen; Haldor Topsøe A/S

    2006-01-01

    This paper presents a cost-effective design for large-scale methanol production. It is demonstrated how recent technological progress can be utilised to design a methanol plant which is inexpensive and easy to operate, while at the same time very robust towards variations in feed-stock composition and product specifications.

  10. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  11. Large-scale data analysis using the Wigner function

    Science.gov (United States)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.
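
    A toy sketch (an assumption, not the authors' code) of a discrete pseudo Wigner-Ville distribution, which exposes the "frequency variable" of a signal jointly with time; the chirp test signal and resolution conventions are illustrative.

```python
import numpy as np

# Toy discrete pseudo Wigner-Ville distribution: FFT of the instantaneous
# autocorrelation at each time sample. Edge effects are handled crudely by
# limiting the lag; this is a sketch, not a production implementation.

def pseudo_wigner(x):
    x = np.asarray(x, dtype=complex)
    n_samples = len(x)
    W = np.zeros((n_samples, n_samples))
    for n in range(n_samples):
        lag_max = min(n, n_samples - 1 - n)          # stay inside the signal
        r = np.zeros(n_samples, dtype=complex)
        for m in range(-lag_max, lag_max + 1):
            r[m % n_samples] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real                    # frequency axis at time n
    return W

# Usage: a linear chirp should concentrate energy along a rising ridge.
t = np.arange(256)
chirp = np.exp(1j * 2 * np.pi * (0.05 * t + 0.0005 * t ** 2))
tfd = pseudo_wigner(chirp)
print(tfd.shape, tfd.max())
```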

  12. Main Achievements of Cotton Large-scale Transformation System

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Cotton large-scale transformation methods system was established based on innovation of cotton transformation methods. It obtains 8000 transgenic cotton plants per year by combining Agrobacterium tumefaciens-mediated, pollen-tube pathway and biolistic methods together efficiently. More than

  13. Large-scale search for dark-matter axions

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C.A., LLNL; Kinion, D.; Stoeffl, W.; Van Bibber, K.; Daw, E.J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); McBride, J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Peng, H. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Rosenberg, L.J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Xin, H. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Laveigne, J. [Florida Univ., Gainesville, FL (United States); Sikivie, P. [Florida Univ., Gainesville, FL (United States); Sullivan, N.S. [Florida Univ., Gainesville, FL (United States); Tanner, D.B. [Florida Univ., Gainesville, FL (United States); Moltz, D.M. [Lawrence Berkeley Lab., CA (United States); Powell, J. [Lawrence Berkeley Lab., CA (United States); Clarke, J. [Lawrence Berkeley Lab., CA (United States); Nezrick, F.A. [Fermi National Accelerator Lab., Batavia, IL (United States); Turner, M.S. [Fermi National Accelerator Lab., Batavia, IL (United States); Golubev, N.A. [Russian Academy of Sciences, Moscow (Russia); Kravchuk, L.V. [Russian Academy of Sciences, Moscow (Russia)

    1998-01-01

    Early results from a large-scale search for dark matter axions are presented. In this experiment, axions constituting our dark-matter halo may be resonantly converted to monochromatic microwave photons in a high-Q microwave cavity permeated by a strong magnetic field. Sensitivity at the level of one important axion model (KSVZ) has been demonstrated.

  14. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the homogeniza

  15. Quantized pressure control in large-scale nonlinear hydraulic networks

    NARCIS (Netherlands)

    Persis, Claudio De; Kallesøe, Carsten Skovmose; Jensen, Tom Nørgaard

    2010-01-01

    It was shown previously that semi-global practical pressure regulation at designated points of a large-scale nonlinear hydraulic network is guaranteed by distributed proportional controllers. For a correct implementation of the control laws, each controller, which is located at these designated poin

  16. Information Tailoring Enhancements for Large-Scale Social Data

    Science.gov (United States)

    2016-06-15

    Progress Report No. 3, Information Tailoring Enhancements for Large-Scale Social Data. Reporting period: March 16, 2016 – June 15, 2016; Contract No. N00014-15-P-5138, sponsored by ONR. Work performed within this reporting period includes enhanced Named Entity Recognition (NER)...

  17. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshops objectives were...

  18. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured...

  19. Evidence for large-scale effects of competition: niche displacement in Canada lynx and bobcat

    OpenAIRE

    Peers, Michael J. L.; Thornton, Daniel H.; Murray, Dennis L.

    2013-01-01

    Determining the patterns, causes and consequences of character displacement is central to our understanding of competition in ecological communities. However, the majority of competition research has occurred over small spatial extents or focused on fine-scale differences in morphology or behaviour. The effects of competition on broad-scale distribution and niche characteristics of species remain poorly understood but critically important. Using range-wide species distribution models, we eval...

  20. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objects; discusses applications including scattering from airborne targets, scattering from red

  1. A generic trust framework for large-scale open systems using machine learning

    OpenAIRE

    Liu, Xin; Tredan, Gilles; Datta, Anwitaman

    2011-01-01

    In many large scale distributed systems and on the web, agents need to interact with other unknown agents to carry out some tasks or transactions. The ability to reason about and assess the potential risks in carrying out such transactions is essential for providing a safe and reliable environment. A traditional approach to reason about the trustworthiness of a transaction is to determine the trustworthiness of the specific agent involved, derived from the history of its behavior. As a depart...

  2. CFHTLenS: Mapping the Large Scale Structure with Gravitational Lensing

    CERN Document Server

    Van Waerbeke, Ludovic; Erben, Thomas; Heymans, Catherine; Hildebrandt, Hendrik; Hoekstra, Henk; Kitching, Thomas D; Mellier, Yannick; Miller, Lance; Coupon, Jean; Harnois-Déraps, Joachim; Fu, Liping; Hudson, Michael J; Kilbinger, Martin; Kuijken, Konrad; Rowe, Barnaby T P; Schrabback, Tim; Semboloni, Elisabetta; Vafaei, Sanaz; van Uitert, Edo; Velander, Malin

    2013-01-01

    We present a quantitative analysis of the largest contiguous maps of projected mass density obtained from gravitational lensing shear. We use data from the 154 deg2 covered by the Canada France Hawaii Telescope Lensing Survey. Our study is the first attempt to quantitatively characterize the scientific value of lensing maps, which could serve in the future as a complementary approach to the study of the dark universe with gravitational lensing. We show that mass maps contain unique cosmological information beyond that of traditional two-points statistical analysis techniques. Using a series of numerical simulations, we first show that gravitational lensing inversion provides a reliable probe of the projected matter distribution of large scale structure. We validate our analysis by quantifying the robustness of the maps with various statistical estimators. The same process is then applied to the CFHTLenS data. It is found that the statistical properties of the projected mass are fully consistent with the cosmo...

  3. Series Design of Large-Scale NC Machine Tool

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi

    2007-01-01

    Product system design is a mature concept in developed western countries and was applied in the defence industry during the last century. In China, however, functional combination is still the main method for product system design, which leaves the concepts of product generation and product interaction in a weak position relative to the requirements of global markets. Today, serial product design has attracted much attention in the design field, and the definition of a product generation and its parameters has become standard in serial product design. Although the design of a large-scale NC machine tool is complicated, it can be further optimized by precise object design, placing the concept of platform establishment firmly within serial product design. The essence of serial product design is demonstrated through the design process of a large-scale NC machine tool.

  4. Large-scale flow generation by inhomogeneous helicity

    CERN Document Server

    Yokoi, Nobumitsu

    2015-01-01

    The effect of kinetic helicity (velocity--vorticity correlation) on turbulent momentum transport is investigated. The turbulent kinetic helicity (pseudoscalar) enters into the Reynolds stress (mirrorsymmetric tensor) expression in the form of a helicity gradient as the coupling coefficient for the mean vorticity and/or the angular velocity (axial vector), which suggests the possibility of mean-flow generation in the presence of inhomogeneous helicity. This inhomogeneous helicity effect, which was previously confirmed at the level of a turbulence- or closure-model simulation, is examined with the aid of direct numerical simulations of rotating turbulence with non-uniform helicity sustained by an external forcing. The numerical simulations show that the spatial distribution of the Reynolds stress is in agreement with the helicity-related term coupled with the angular velocity, and that a large-scale flow is generated in the direction of angular velocity. Such a large-scale flow is not induced in the case of hom...

  5. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    Science.gov (United States)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a 3.0 m diameter vehicle sub-orbital flight. To further this technology, large-scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large-scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts that are under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  6. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform an effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
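
    A hedged sketch of the low-rank idea behind prototype vectors: a large kernel matrix approximated from a small set of prototype points, Nystrom-style. The RBF kernel, data sizes, and prototype selection here are illustrative assumptions, not the PVM algorithm itself.

```python
import numpy as np

# Rank-m approximation of an n x n kernel matrix from m << n prototypes.

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                            # toy "large" data set
prototypes = X[rng.choice(len(X), 50, replace=False)]    # m = 50 prototypes

K_nm = rbf_kernel(X, prototypes)                 # n x m cross-kernel
K_mm = rbf_kernel(prototypes, prototypes)        # m x m prototype kernel
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T  # rank-m approximation of K

K_exact = rbf_kernel(X, X)
err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(f"relative approximation error: {err:.3f}")
```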

  7. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  8. Cluster Galaxy Dynamics and the Effects of Large Scale Environment

    CERN Document Server

    White, Martin; Smit, Renske

    2010-01-01

    We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes and discuss some implications this has for multi-physics probes of clusters. We pay particular attention to velocity dispersions, matching galaxies to subhalos which are explicitly tracked in the simulation. We find that not only do halos persist as subhalos when they fall into a larger host, groups of subhalos retain their identity for long periods within larger host halos. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and ...

  9. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    As a result of the growing demand for food, feed and industrial raw materials in the first decade of this century, and the usually welcoming policies regarding investors amongst the governments of developing countries, there has been a renewed interest in agriculture and an increase in large...... to ‘land grabbing’ for large-scale farming (i.e. outgrower schemes and contract farming could modernise agricultural production while allowing smallholders to maintain their land ownership), to integrate them into global agro-food value chains and to increase their productivity and welfare. However......, the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...

  10. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
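
    A minimal 1D sketch of a Godunov-type finite-volume shallow-water step with a Rusanov flux and a simple wet/dry guard, on a flat bed with fixed boundaries; it illustrates the ingredients named above, not the paper's 2D unstructured adaptive model.

```python
import numpy as np

# 1D shallow water: state (h, hu), Rusanov interface fluxes, explicit update.
g, eps = 9.81, 1e-6

def flux(h, hu):
    u = np.where(h > eps, hu / np.maximum(h, eps), 0.0)
    return np.array([hu, hu * u + 0.5 * g * h * h])

def step(h, hu, dx, dt):
    hL, hR, huL, huR = h[:-1], h[1:], hu[:-1], hu[1:]
    uL = np.where(hL > eps, huL / np.maximum(hL, eps), 0.0)
    uR = np.where(hR > eps, huR / np.maximum(hR, eps), 0.0)
    smax = np.maximum(np.abs(uL) + np.sqrt(g * hL), np.abs(uR) + np.sqrt(g * hR))
    FL, FR = flux(hL, huL), flux(hR, huR)
    F = 0.5 * (FL + FR) - 0.5 * smax * np.array([hR - hL, huR - huL])
    # conservative update of interior cells; boundaries kept fixed for brevity
    h[1:-1] -= dt / dx * (F[0, 1:] - F[0, :-1])
    hu[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
    h[:] = np.maximum(h, 0.0)
    hu[h <= eps] = 0.0            # wet/dry front: no momentum in dry cells
    return h, hu

# Usage: a dam break over a partially dry bed.
n, dx, dt = 200, 1.0, 0.05
h = np.where(np.arange(n) < n // 2, 2.0, 0.0)
hu = np.zeros(n)
for _ in range(100):
    h, hu = step(h, hu, dx, dt)
print(f"total volume: {h.sum() * dx:.2f}")
```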

  11. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    CERN Document Server

    Blackman, Eric G

    2014-01-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. H...

  12. Ultra-large scale cosmology with next-generation experiments

    CERN Document Server

    Alonso, David; Ferreira, Pedro G; Maartens, Roy; Santos, Mario G

    2015-01-01

    Future surveys of large-scale structure will be able to measure perturbations on the scale of the cosmological horizon, and so could potentially probe a number of novel relativistic effects that are negligibly small on sub-horizon scales. These effects leave distinctive signatures in the power spectra of clustering observables and, if measurable, would open a new window on relativistic cosmology. We quantify the size and detectability of the effects for a range of future large-scale structure surveys: spectroscopic and photometric galaxy redshift surveys, intensity mapping surveys of neutral hydrogen, and continuum surveys of radio galaxies. Our forecasts show that next-generation experiments, reaching out to redshifts z ~ 4, will not be able to detect previously-undetected general-relativistic effects from the single-tracer power spectra alone, although they may be able to measure the lensing magnification in the auto-correlation. We also perform a rigorous joint forecast for the detection of primordial non-...

  13. Bayesian large-scale structure inference and cosmic web analysis

    CERN Document Server

    Leclercq, Florent

    2015-01-01

    Surveys of the cosmic large-scale structure carry opportunities for building and testing cosmological theories about the origin and evolution of the Universe. This endeavor requires appropriate data assimilation tools, for establishing the contact between survey catalogs and models of structure formation. In this thesis, we present an innovative statistical approach for the ab initio simultaneous analysis of the formation history and morphology of the cosmic web: the BORG algorithm infers the primordial density fluctuations and produces physical reconstructions of the dark matter distribution that underlies observed galaxies, by assimilating the survey data into a cosmological structure formation model. The method, based on Bayesian probability theory, provides accurate means of uncertainty quantification. We demonstrate the application of BORG to the Sloan Digital Sky Survey data and describe the primordial and late-time large-scale structure in the observed volume. We show how the approach has led to the fi...

  14. Constraining cosmological ultra-large scale structure using numerical relativity

    CERN Document Server

    Braden, Jonathan; Peiris, Hiranya V; Aguirre, Anthony

    2016-01-01

    Cosmic inflation, a period of accelerated expansion in the early universe, can give rise to large amplitude ultra-large scale inhomogeneities on distance scales comparable to or larger than the observable universe. The cosmic microwave background (CMB) anisotropy on the largest angular scales is sensitive to such inhomogeneities and can be used to constrain the presence of ultra-large scale structure (ULSS). We numerically evolve nonlinear inhomogeneities present at the beginning of inflation in full General Relativity to assess the CMB quadrupole constraint on the amplitude of the initial fluctuations and the size of the observable universe relative to a length scale characterizing the ULSS. To obtain a statistically significant number of simulations, we adopt a toy model in which inhomogeneities are injected along a preferred direction. We compute the likelihood function for the CMB quadrupole including both ULSS and the standard quantum fluctuations produced during inflation. We compute the posterior given...

  15. The complexity nature of large-scale software systems

    Institute of Scientific and Technical Information of China (English)

    Yan Dong; Qi Guo-Ning; Gu Xin-Jian

    2006-01-01

    In software engineering, class diagrams are often used to describe the system's class structures in Unified Modelling Language (UML). A class diagram, as a graph, is a collection of static declarative model elements, such as classes, interfaces, and the relationships of their connections with each other. In this paper, class graphs are examined within several Java software systems provided by Sun and IBM, and some new features are found. For a large-scale Java software system, its in-degree distribution tends to an exponential distribution, while its out-degree and degree distributions reveal the power-law behaviour. And then a directed preferential-random model is established to describe the corresponding degree distribution features and evolve large-scale Java software systems.
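
    A minimal sketch of how such degree distributions can be measured from a class graph; the edge list below is a hypothetical example, where each edge (A, B) means class A references class B, as might be extracted from a UML class diagram or bytecode analysis.

```python
from collections import Counter

# Hypothetical class-dependency edges (caller -> callee).
edges = [
    ("OrderService", "Logger"), ("OrderService", "OrderRepository"),
    ("PaymentService", "Logger"), ("PaymentService", "HttpClient"),
    ("ReportJob", "Logger"), ("ReportJob", "OrderRepository"),
]

out_deg = Counter(src for src, _ in edges)   # references made by a class
in_deg = Counter(dst for _, dst in edges)    # references to a class

def distribution(degrees):
    """Map degree value k -> number of classes with that degree."""
    return Counter(degrees.values())

print("out-degree distribution:", dict(distribution(out_deg)))
print("in-degree distribution:", dict(distribution(in_deg)))
# Plotting these counts on log-linear vs. log-log axes is how exponential and
# power-law tails are distinguished for large systems.
```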

  16. Optimization of Survivability Analysis for Large-Scale Engineering Networks

    CERN Document Server

    Poroseva, S V

    2012-01-01

    Engineering networks fall into the category of large-scale networks with heterogeneous nodes such as sources and sinks. The survivability analysis of such networks requires the analysis of the connectivity of the network components for every possible combination of faults to determine a network response to each combination of faults. From the computational complexity point of view, the problem belongs to the class of exponential time problems at least. Partially, the problem complexity can be reduced by mapping the initial topology of a complex large-scale network with multiple sources and multiple sinks onto a set of smaller sub-topologies with multiple sources and a single sink connected to the network of sources by a single link. In this paper, the mapping procedure is applied to the Florida power grid.
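
    A brute-force sketch of the combinatorial analysis described above, on a tiny invented network: every combination of link faults is enumerated and each sink is checked for connectivity to at least one source. The exponential number of scenarios is exactly what motivates decomposing the topology into smaller sub-topologies.

```python
from itertools import combinations

# Toy network with two sources and two sinks; all names are made up.
links = {("s1", "a"), ("s2", "b"), ("a", "b"), ("a", "t1"), ("b", "t2"), ("t1", "t2")}
sources, sinks = {"s1", "s2"}, {"t1", "t2"}

def connected(start, goals, alive):
    """Depth-first search over undirected surviving links."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        if u in goals:
            return True
        for x, y in alive:
            for nxt in ((y,) if x == u else (x,) if y == u else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return False

survivable = scenarios = 0
for k in range(len(links) + 1):
    for failed in combinations(links, k):        # every fault combination
        alive = links - set(failed)
        scenarios += 1
        if all(connected(t, sources, alive) for t in sinks):
            survivable += 1
print(f"{survivable}/{scenarios} fault scenarios keep all sinks supplied")
```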

  17. Distant galaxy clusters in the XMM Large Scale Structure survey

    CERN Document Server

    Willis, J P; Bremer, M N; Pierre, M; Adami, C; Ilbert, O; Maughan, B; Maurogordato, S; Pacaud, F; Valtchanov, I; Chiappetti, L; Thanjavur, K; Gwyn, S; Stanway, E R; Winkworth, C

    2012-01-01

    (Abridged) Distant galaxy clusters provide important tests of the growth of large scale structure in addition to highlighting the process of galaxy evolution in a consistently defined environment at large look back time. We present a sample of 22 distant (z>0.8) galaxy clusters and cluster candidates selected from the 9 deg2 footprint of the overlapping X-ray Multi Mirror (XMM) Large Scale Structure (LSS), CFHTLS Wide and Spitzer SWIRE surveys. Clusters are selected as extended X-ray sources with an accompanying overdensity of galaxies displaying optical to mid-infrared photometry consistent with z>0.8. Nine clusters have confirmed spectroscopic redshifts in the interval 0.80.8 clusters.

  18. In the fast lane: large-scale bacterial genome engineering.

    Science.gov (United States)

    Fehér, Tamás; Burland, Valerie; Pósfai, György

    2012-07-31

    The last few years have witnessed rapid progress in bacterial genome engineering. The long-established, standard ways of DNA synthesis, modification, transfer into living cells, and incorporation into genomes have given way to more effective, large-scale, robust genome modification protocols. Expansion of these engineering capabilities is due to several factors. Key advances include: (i) progress in oligonucleotide synthesis and in vitro and in vivo assembly methods, (ii) optimization of recombineering techniques, (iii) introduction of parallel, large-scale, combinatorial, and automated genome modification procedures, and (iv) rapid identification of the modifications by barcode-based analysis and sequencing. Combination of the brute force of these techniques with sophisticated bioinformatic design and modeling opens up new avenues for the analysis of gene functions and cellular network interactions, but also in engineering more effective producer strains. This review presents a summary of recent technological advances in bacterial genome engineering.

  19. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik;

    2013-01-01

    was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power...... with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply......Renewable energy is one of the possible solutions when addressing climate change. Today, large-scale renewable energy integration needs to include the experience to balance the discrepancy between electricity demand and supply. The electrification of transportation may have the potential to deal...

  20. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  1. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    Science.gov (United States)

    2015-09-30

    ... sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over ... develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse

  2. A Large-Scale Study of Online Shopping Behavior

    OpenAIRE

    Nalchigar, Soroosh; Weber, Ingmar

    2012-01-01

    The continuous growth of electronic commerce has stimulated great interest in studying online consumer behavior. Given the significant growth in online shopping, better understanding of customers allows better marketing strategies to be designed. While studies of online shopping attitude are widespread in the literature, studies of browsing habits differences in relation to online shopping are scarce. This research performs a large scale study of the relationship between Internet browsing hab...

  3. One-dimensional adhesion model for large scale structures

    Directory of Open Access Journals (Sweden)

    Kayyunnapara Thomas Joseph

    2010-05-01

    We discuss initial value problems and initial boundary value problems for some systems of partial differential equations appearing in the modelling of large-scale structure formation in the universe. We restrict the initial data to be bounded measurable and locally bounded variation functions and use the Volpert product to justify the product which appears in the equation. For more general initial data in the class of generalized functions of Colombeau, we construct the solution in the sense of association.

  4. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT]; Marzouk, Youssef [MIT]

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  5. A Large Scale, High Resolution Agent-Based Insurgency Model

    Science.gov (United States)

    2013-09-30

    Recent years have seen a large growth in research aimed at modeling the intricate social-cultural climate in ... a relatively small (40x40) lattice with toroidal structure, such that agent movement that extended beyond a grid edge would appear on the opposite ... locations containing obstacles such as rivers.

  6. Large scale-small scale duality and cosmological constant

    CERN Document Server

    Darabi, F

    1999-01-01

    We study a model of quantum cosmology originating from a classical model of gravitation where a self interacting scalar field is coupled to gravity with the metric undergoing a signature transition. We show that there are dual classical signature changing solutions, one at large scales and the other at small scales. It is possible to fine-tune the physics in both scales with an infinitesimal effective cosmological constant.

  7. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    Science.gov (United States)

    1985-10-07

    Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge: "Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism," G. Agha et al.

  8. Robust regression for large-scale neuroimaging studies.

    OpenAIRE

    2015-01-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypot...

  9. Measuring large scale space perception in literary texts

    Science.gov (United States)

    Rossi, Paolo

    2007-07-01

    A center and radius of “perception” (in the sense of environmental cognition) can be formally associated with a written text and operationally defined. Simple algorithms for their computation are presented, and indicators for anisotropy in large scale space perception are introduced. The relevance of these notions for the analysis of literary and historical records is briefly discussed and illustrated with an example taken from medieval historiography.
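
    A hedged sketch of the kind of computation the abstract describes: place mentions in a text are mapped to coordinates and mention counts, the centre of perception is their weighted centroid, and the radius is the weighted mean distance from that centre. The places, counts, and flat-earth distance approximation below are invented for illustration.

```python
import math

mentions = {                 # place: (latitude, longitude, number of mentions)
    "Roma":        (41.9, 12.5, 40),
    "Firenze":     (43.8, 11.3, 25),
    "Parigi":      (48.9,  2.4,  5),
    "Gerusalemme": (31.8, 35.2,  3),
}

total = sum(n for _, _, n in mentions.values())
lat_c = sum(lat * n for lat, _, n in mentions.values()) / total
lon_c = sum(lon * n for _, lon, n in mentions.values()) / total

def km(lat1, lon1, lat2, lon2):
    # small-angle, flat-earth distance; adequate for a qualitative indicator
    dx = (lon2 - lon1) * 111.3 * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * 111.3
    return math.hypot(dx, dy)

radius = sum(km(lat, lon, lat_c, lon_c) * n
             for lat, lon, n in mentions.values()) / total
print(f"centre of perception: ({lat_c:.2f}, {lon_c:.2f}), radius ~ {radius:.0f} km")
```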

  10. Turbulent large-scale structure effects on wake meandering

    Science.gov (United States)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of efficient, pragmatic and successful simulations of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability since one source of wake meandering is these large - larger than the turbine diameter - turbulent structures. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This allows simulating large-scale turbulent structures - larger than the computational domain - leading to an efficient simulation technique for wake meandering. Since the stochastic wind generator includes shear, the turbulence production is included in the analysis without the necessity of resolving the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While the temporal resolution of experimental measurements is high, the spatial resolution is often too low. LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low-frequency content - or large-scale turbulent structures - is

  11. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety]

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions are investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  12. Large-Scale Integrated Carbon Nanotube Gas Sensors

    OpenAIRE

    Kim, Joondong

    2012-01-01

    Carbon nanotube (CNT) is a promising one-dimensional nanostructure for various nanoscale electronics. Additionally, nanostructures would provide a significant large surface area at a fixed volume, which is an advantage for high-responsive gas sensors. However, the difficulty in fabrication processes limits the CNT gas sensors for the large-scale production. We review the viable scheme for large-area application including the CNT gas sensor fabrication and reaction mechanism with a practical d...

  13. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from an accidental, and even more importantly intentional spills, have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  14. Large-scale Alfvén vortices

    Energy Technology Data Exchange (ETDEWEB)

    Onishchenko, O. G., E-mail: onish@ifz.ru [Institute of Physics of the Earth, 10 B. Gruzinskaya, 123242 Moscow, Russian Federation and Space Research Institute, 84/32 Profsouznaya str., 117997 Moscow (Russian Federation); Pokhotelov, O. A., E-mail: pokh@ifz.ru [Institute of Physics of the Earth, 10 B. Gruzinskaya, 123242 Moscow (Russian Federation); Horton, W., E-mail: wendell.horton@gmail.com [Institute for Fusion Studies and Applied Research Laboratory, University of Texas at Austin, Austin, Texas 78713 (United States); Scullion, E., E-mail: scullie@tcd.ie [School of Physics, Trinity College Dublin, Dublin 2 (Ireland); Fedun, V., E-mail: v.fedun@sheffield.ac.uk [Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S13JD (United Kingdom)

    2015-12-15

    The new type of large-scale vortex structures of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius and characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, fluid and magnetic field vorticity, the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  15. Multimodel Design of Large Scale Systems with Multiple Decision Makers.

    Science.gov (United States)

    1982-08-01

    MULTIMODEL DESIGN OF LARGE SCALE SYSTEMS WITH ... is stable for all e in H. To avoid mathematical complications, the feedback matrices of (2.31) are restricted to be of the form S(e) = ... control values used during all past sampling intervals. This information pattern, though not of much practical importance, is mathematically con

  16. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adapt Mars’ full topography compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  17. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    experiments in wave/current environments. INTRODUCTION: The LSTF (Figure 1) is a large-scale laboratory facility capable of simulating conditions ... into a cDAQ-9184 CompactDAQ Chassis via a 16-channel NI 9220. The four offshore wave gauge signals are broadcast wirelessly via a National ... Instrument 9201 paired with a cDAQ-9191. A wireless router then moves this data onto the local area network. A Cisco SG 200 switch is used to manage the

  18. Large Scale Magnetic Fields: Density Power Spectrum in Redshift Space

    Indian Academy of Sciences (India)

    Rajesh Gopal; Shiv K. Sethi

    2003-09-01

    We compute the density redshift-space power spectrum in the presence of tangled magnetic fields and compare it with existing observations. Our analysis shows that if these magnetic fields originated in the early universe then it is possible to construct models for which the shape of the power spectrum agrees with the large scale slope of the observed power spectrum. However, requiring compatibility with observed CMBR anisotropies, the normalization of the power spectrum is too low for magnetic fields to have significant impact on the large scale structure at present. Magnetic fields of a more recent origin generically give a density power spectrum P(k) ∝ k^4, which doesn't agree with the shape of the observed power spectrum at any scale. Magnetic fields generate curl modes of the velocity field which increase both the quadrupole and hexadecapole of the redshift space power spectrum. For curl modes, the hexadecapole dominates over the quadrupole. So the presence of curl modes could be indicated by an anomalously large hexadecapole, which has not yet been computed from observation. It appears difficult to construct models in which tangled magnetic fields could have played a major role in shaping the large scale structure in the present epoch. However if they did, one of the best ways to infer their presence would be from the redshift space effects in the density power spectrum.
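
    For reference, the quadrupole and hexadecapole mentioned above are defined through the standard Legendre multipole decomposition of the redshift-space power spectrum (textbook convention, not reproduced from the article):

```latex
\begin{align}
  P^{s}(k,\mu) &= \sum_{\ell = 0,2,4,\dots} P_\ell(k)\, \mathcal{L}_\ell(\mu), \\
  P_\ell(k)    &= \frac{2\ell + 1}{2} \int_{-1}^{1} d\mu\; P^{s}(k,\mu)\, \mathcal{L}_\ell(\mu),
\end{align}
% where $\mu$ is the cosine of the angle between the wavevector and the line
% of sight and $\mathcal{L}_\ell$ is the Legendre polynomial of order $\ell$;
% $\ell = 2$ gives the quadrupole and $\ell = 4$ the hexadecapole.
```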

  19. Searching for Large Scale Structure in Deep Radio Surveys

    CERN Document Server

    Baleisis, Audra; Lahav, Ofer; Loan, Andrew J.; Wall, Jasper V.

    1997-01-01

    (Abridged Abstract) We calculate the expected amplitude of the dipole and higher spherical harmonics in the angular distribution of radio galaxies. The median redshift of radio sources in existing catalogues is z=1, which allows us to study large scale structure on scales between those accessible to present optical and infrared surveys, and that of the Cosmic Microwave Background (CMB). The dipole is due to 2 effects which turn out to be of comparable magnitude: (i) our motion with respect to the CMB, and (ii) large scale structure, parameterised here by a family of Cold Dark Matter power-spectra. We make specific predictions for the Green Bank (87GB) and Parkes-MIT-NRAO (PMN) catalogues. For these relatively sparse catalogues both the motion and large scale structure dipole effects are expected to be smaller than the Poisson shot-noise. However, we detect dipole and higher harmonics in the combined 87GB-PMN catalogue which are far larger than expected. We attribute this to a 2 % flux mismatch between the two...

  20. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  1. Star formation associated with a large-scale infrared bubble

    CERN Document Server

    Xu, Jin-Long

    2014-01-01

    Using the data from the Galactic Ring Survey (GRS) and Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE), we performed a study of a large-scale infrared bubble with a size of about 16 pc at a distance of 2.0 kpc. We present the 12CO J=1-0, 13CO J=1-0 and C18O J=1-0 observations of HII region G53.54-0.01 (Sh2-82) obtained at the Purple Mountain Observatory (PMO) 13.7 m radio telescope to investigate the detailed distribution of associated molecular material. The large-scale infrared bubble shows a half-shell morphology at 8 um. H II regions G53.54-0.01, G53.64+0.24, and G54.09-0.06 are situated on the bubble. Comparing the radio recombination line velocities and associated 13CO J=1-0 components of the three H II regions, we found that the 8 um emission associated with H II region G53.54-0.01 should belong to the foreground emission, and only overlap with the large-scale infrared bubble in the line of sight. Three extended green objects (EGOs, the candidate massive young stellar objects), ...

  2. Equivalent common path method in large-scale laser comparator

    Science.gov (United States)

    He, Mingzhao; Li, Jianshuang; Miao, Dongjing

    2015-02-01

    The large-scale laser comparator is the main standard device providing accurate, reliable and traceable measurements for high-precision large-scale line and 3D measurement instruments. It is mainly composed of a guide rail, a motion control system, an environmental parameters monitoring system and a displacement measurement system. In the laser comparator, the main error sources are the temperature distribution, the straightness of the guide rail, and the pitch and yaw of the measuring carriage. To minimize the measurement uncertainty, an equivalent common optical path scheme is proposed and implemented. Three laser interferometers are adjusted to be parallel with the guide rail. The displacement in an arbitrary virtual optical path is calculated using three displacements without knowledge of the carriage orientations at the start and end positions. The orientation of the air-floating carriage is calculated from the displacements of the three optical paths and the positions of the three retroreflectors, which are precisely measured by a laser tracker. A fourth laser interferometer is used in the virtual optical path as a reference to verify this compensation method. This paper analyzes the effect of rail straightness on the displacement measurement. The proposed method, through experimental verification, can improve the measurement uncertainty of the large-scale laser comparator.

  3. A visualization framework for large-scale virtual astronomy

    Science.gov (United States)

    Fu, Chi-Wing

    Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale; the graphics rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and rendering techniques that advance the state of the art and can be transferred to practice in digital planetarium systems.

  4. Robust regression for large-scale neuroimaging studies.

    Science.gov (United States)

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies.
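
    A hedged sketch of the contrast the abstract draws, on simulated data rather than the study's pipeline: an ordinary least-squares fit versus a Huber M-estimator (robust regression), which down-weights outlying subjects. The variable names and noise model are assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Simulated brain-behaviour relationship with a handful of artifact-driven
# outliers; robust regression should recover the true slope (0.5) better.
rng = np.random.default_rng(42)
n = 200
behaviour = rng.normal(size=n)
signal = 0.5 * behaviour + rng.normal(scale=1.0, size=n)
signal[:10] += 8.0                      # a few outlying subjects

X = sm.add_constant(behaviour)          # design matrix: intercept + regressor
ols_fit = sm.OLS(signal, X).fit()
robust_fit = sm.RLM(signal, X, M=sm.robust.norms.HuberT()).fit()

print("OLS slope:    %.3f" % ols_fit.params[1])
print("Robust slope: %.3f" % robust_fit.params[1])
```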

  5. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
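
    The per-pixel linked list idea can be sketched on the CPU in a few lines. The following Python fragment is only a schematic of the data structure (a head pointer per pixel plus a shared node pool), not the thesis's GPU implementation; the field names and the toy payload are invented.

        import numpy as np

        W, H = 64, 64
        head = np.full((H, W), -1, dtype=np.int64)   # head index per pixel (-1 = empty list)
        nodes = []                                   # shared pool: (payload, next_index)

        def insert(px, py, payload):
            """Prepend a pathline-segment record to the list of pixel (px, py)."""
            nodes.append((payload, head[py, px]))
            head[py, px] = len(nodes) - 1

        def segments_at(px, py):
            """Walk one pixel's list, e.g. for filtering or color-coding at display time."""
            out, i = [], head[py, px]
            while i != -1:
                payload, i = nodes[i]
                out.append(payload)
            return out

        # Two pathline segments that project onto the same pixel.
        insert(10, 20, {"line_id": 0, "depth": 0.3, "velocity": 1.2})
        insert(10, 20, {"line_id": 7, "depth": 0.8, "velocity": 0.4})
        print(segments_at(10, 20))   # most recently inserted segment comes first

    On the GPU the same structure is typically built in a single pass with an atomic counter and an exchange on the head buffer, which is what lets the construction scale to large pathline sets.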

  6. Reliability assessment for components of large scale photovoltaic systems

    Science.gov (United States)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of the system components. This approach can be used to ensure secure operation of the system thanks to its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
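
    The building blocks of such an analysis are easy to state. The sketch below combines exponential component reliabilities through series and parallel blocks; the failure rates and the system layout are invented for illustration and are not taken from the paper.

        import numpy as np

        def reliability(failure_rate_per_hour, hours):
            """Exponential survival probability R(t) = exp(-lambda * t)."""
            return np.exp(-failure_rate_per_hour * hours)

        t = 8760.0                                  # one year of operation
        r_module   = reliability(0.1e-6, t)         # PV module string (assumed rate)
        r_inverter = reliability(5.0e-6, t)         # inverter (assumed rate)
        r_breaker  = reliability(0.5e-6, t)         # AC breaker (assumed rate)

        # Series block: every component must survive.
        r_single_path = r_module * r_inverter * r_breaker

        # Parallel block: two redundant inverters; the block fails only if both fail.
        r_redundant_inverters = 1.0 - (1.0 - r_inverter) ** 2
        r_with_redundancy = r_module * r_redundant_inverters * r_breaker

        print(f"single path reliability:  {r_single_path:.4f}")
        print(f"with redundant inverters: {r_with_redundancy:.4f}")

    A fault tree is, in essence, systematic bookkeeping of such AND/OR combinations from the top event down to the basic component failures, which is what lets the analysis flag the components that dominate system unavailability.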

  7. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large scale, scalability and computational efficiency are considered desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
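
    The core update can be sketched on a toy problem. The fragment below applies the bilateral idea (combine the current iterate with a rank-1 matrix built from the leading eigenvector of the descent direction, choosing the combination by a line search) to a simple PSD-constrained least-squares objective with unit trace. The objective, constraint and coarse line search are assumptions for the example; this is not the paper's full algorithm or its convergence machinery.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 20
        B = rng.standard_normal((n, n))
        C = B @ B.T / n                      # data matrix; objective f(X) = ||X - C||_F^2

        def f(X):
            return float(np.sum((X - C) ** 2))

        X = np.eye(n) / n                    # feasible start: PSD with trace 1
        print("initial objective:", round(f(X), 4))
        for _ in range(200):
            G = 2.0 * (X - C)                # gradient of f at X
            _, V = np.linalg.eigh(-G)        # eigen-decomposition of the descent direction
            v = V[:, -1]                     # leading eigenvector
            R = np.outer(v, v)               # rank-1 candidate, trace(R) = 1
            # The combination (1 - t) X + t R stays PSD with trace 1;
            # pick t by a coarse line search on the objective.
            ts = np.linspace(0.0, 1.0, 101)
            t = min(ts, key=lambda s: f((1 - s) * X + s * R))
            X = (1 - t) * X + t * R
        print("final objective:  ", round(f(X), 4))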

  8. Systematic Literature Review of Agile Scalability for Large Scale Projects

    Directory of Open Access Journals (Sweden)

    Hina saeeda

    2015-09-01

    Full Text Available Among new methods, "agile" has emerged as the leading approach in the software industry for software development. In its various forms, agile is applied to handle issues such as low cost, tight time-to-market schedules, continuously changing requirements, communication and coordination, team size and distributed environments. Agile has proved to be successful in small and medium size projects; however, it has several limitations when applied to large projects. The purpose of this study is to examine agile techniques in detail and to find and highlight their restrictions for large projects with the help of a systematic literature review. The systematic literature review addresses the following research questions: (1) How can agile approaches be made scalable and adoptable for large projects? (2) What existing methods, approaches, frameworks and practices support the agile process in large-scale projects? (3) What are the limitations of existing agile approaches, methods, frameworks and practices with reference to large-scale projects? This study identifies the current research problems of agile scalability for large projects by giving a detailed review of the identified problems and of the existing work that addresses them, and by pointing out the limitations of that work in covering the identified problems. All gathered results are summarized statistically, and based on these findings remedial work will be planned in the future to handle the identified limitations of agile approaches for large-scale projects.

  9. Impact of Large-scale Geological Architectures On Recharge

    Science.gov (United States)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

    Geological and hydrogeological data constitute the basis for assessment of groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data is often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on parameter uncertainty caused by the effect of geological heterogeneity due to lack of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense and they are all calibrated to well head and river flow measurements. Comparison of differences in recharge zones and subsequently well protection zones emphasizes the importance of assessing large-scale geological architecture in hydrological modeling on a regional scale in a non-deterministic way. Geostatistical modeling carried out in a transition probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.

  10. Critical thinking, politics on a large scale and media democracy

    Directory of Open Access Journals (Sweden)

    José Antonio IBÁÑEZ-MARTÍN

    2015-06-01

    Full Text Available A first look at current social reality offers numerous reasons for concern. The spectacle of violence and immorality can easily frighten us. More worrying still is to observe that the horizon of conviviality, peace and wellbeing that Europe had been developing since the Treaty of Rome of 1957 has been seriously compromised by the economic crisis. Today we face an assault on democratic politics, which the media democracy characterizes as an exhausted system that must be changed into a new and great politics, a politics on a large scale. The article analyses the concept of a politics on a large scale, primarily drawing on Nietzsche and noting its connection with great philosophy and great education. The study of Nietzsche's texts leads us to the conclusion that in them we often find an interesting analysis of the problems together with misguided proposals for solutions. We cannot claim to suggest solutions to all the problems, but we outline various proposals for changes to political activity that can reasonably be defended against the media democracy. In conclusion, we point out that a politics on a large scale requires statesmen able to suggest modes of life in common that can structure long-term coexistence.

  11. Large scale structure around a z=2.1 cluster

    CERN Document Server

    Hung, Chao-Ling; Chiang, Yi-Kuan; Capak, Peter; Cowley, Michael J; Darvish, Behnam; Kacprzak, Glenn G; Kovac, K; Lilly, Simon J; Nanayakkara, Themiya; Spitler, Lee R; Tran, Kim-Vy H; Yuan, Tiantian

    2016-01-01

    The most prodigious starburst galaxies are absent in massive galaxy clusters today, but their connection with large scale environments is less clear at $z\gtrsim2$. We present a search of large scale structure around a galaxy cluster core at $z=2.095$ using a set of spectroscopically confirmed galaxies. We find that both color-selected star-forming galaxies (SFGs) and dusty star-forming galaxies (DSFGs) show significant overdensities around the $z=2.095$ cluster. A total of 8 DSFGs (including 3 X-ray luminous active galactic nuclei, AGNs) and 34 SFGs are found within a 10 arcmin radius (corresponds to $\sim$15 cMpc at $z\sim2.1$) from the cluster center and within a redshift range of $\Delta z=0.02$, which leads to galaxy overdensities of $\delta_{\rm DSFG}\sim12.3$ and $\delta_{\rm SFG}\sim2.8$. The cluster core and the extended DSFG- and SFG-rich structure together demonstrate an active cluster formation phase, in which the cluster is accreting a significant amount of material from large scale structure whi...

  12. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
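
    The flavor of a Broyden iteration is easy to convey on a tiny system. The sketch below is a bare-bones dense "good Broyden" solver seeded with a finite-difference Jacobian; it has no globalization and is not the limited-memory variant developed in the report, and the example system is invented.

        import numpy as np

        def fd_jacobian(F, x, eps=1e-6):
            """Forward-difference Jacobian, used only to seed the Broyden matrix."""
            x = np.asarray(x, dtype=float)
            F0 = F(x)
            J = np.empty((F0.size, x.size))
            for j in range(x.size):
                xp = x.copy()
                xp[j] += eps
                J[:, j] = (F(xp) - F0) / eps
            return J

        def broyden(F, x0, tol=1e-10, max_iter=100):
            x = np.asarray(x0, dtype=float)
            B = fd_jacobian(F, x)                       # initial Jacobian approximation
            Fx = F(x)
            for _ in range(max_iter):
                if np.linalg.norm(Fx) < tol:
                    break
                s = np.linalg.solve(B, -Fx)             # quasi-Newton step
                x_new = x + s
                F_new = F(x_new)
                y = F_new - Fx
                B += np.outer(y - B @ s, s) / (s @ s)   # rank-1 secant update: B_new @ s = y
                x, Fx = x_new, F_new
            return x

        # Example: intersect a circle with an exponential curve.
        def F(v):
            x, y = v
            return np.array([x**2 + y**2 - 4.0, np.exp(x) - y - 1.0])

        print(broyden(F, [1.0, 1.0]))

    A limited-memory version stores only the recent (s, y) pairs instead of the dense matrix B, which is what makes the approach viable at the problem sizes discussed in the report.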

  13. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    CERN Document Server

    Fonseca, Ricardo A; Fiúza, Frederico; Davidson, Asher; Tsung, Frank S; Mori, Warren B; Silva, Luís O

    2013-01-01

    A new generation of laser wakefield accelerators, supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modeling for further understanding of the underlying physics and identification of optimal regimes, but large scale modeling of these scenarios is computationally heavy and requires efficient use of state-of-the-art Petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed / shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modeling of LWFA, demonstrating speedups of over 1 order of magni...

  14. Large-scale lattice Boltzmann simulations of complex fluids: advances through the advent of computational Grids.

    Science.gov (United States)

    Harting, Jens; Chin, Jonathan; Venturoli, Maddalena; Coveney, Peter V

    2005-08-15

    During the last 2.5 years, the RealityGrid project has allowed us to be one of the few scientific groups involved in the development of computational Grids. Since smoothly working production Grids are not yet available, we have been able to substantially influence the direction of software and Grid deployment within the project. In this paper, we review our results from large-scale three-dimensional lattice Boltzmann simulations performed over the last 2.5 years. We describe how the proactive use of computational steering, and advanced job migration and visualization techniques enabled us to do our scientific work more efficiently. The projects reported on in this paper are studies of complex fluid flows under shear or in porous media, as well as large-scale parameter searches, and studies of the self-organization of liquid cubic mesophases.

  15. Memory Effects in Turbulent Dynamo Generation and Propagation of Large Scale Magnetic Field

    CERN Document Server

    Fedotov, S; Zubarev, A; Fedotov, Sergei; Ivanov, Alexey; Zubarev, Andrey

    2001-01-01

    We are concerned with large scale magnetic field dynamo generation and propagation of magnetic fronts in turbulent electrically conducting fluids. An effective equation for the large scale magnetic field is developed here that takes into account the finite correlation times of the turbulent flow. This equation involves the memory integrals corresponding to the dynamo source term describing the alpha-effect and turbulent transport of magnetic field. We find that the memory effects can drastically change the dynamo growth rate, in particular, non-local turbulent transport might increase the growth rate several times compared to the conventional gradient transport expression. Moreover, the integral turbulent transport term leads to a large decrease of the speed of magnetic front propagation.
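
    Schematically, and in our own notation rather than the paper's exact equations, a mean-field induction equation with memory replaces the instantaneous alpha-effect and turbulent diffusion terms by convolutions over the history of the field,

        \partial_t \langle B \rangle(t) = \nabla \times \int_0^{t} K_\alpha(t-t')\,\alpha\,\langle B \rangle(t')\,dt' + \int_0^{t} K_\eta(t-t')\,\eta_T\,\nabla^2 \langle B \rangle(t')\,dt' ,

    where the kernels K_\alpha and K_\eta have widths set by the finite correlation time of the turbulence; letting both kernels tend to delta functions recovers the conventional, instantaneous gradient-transport form.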

  16. The co-evolution of social institutions, demography, and large-scale human cooperation.

    Science.gov (United States)

    Powers, Simon T; Lehmann, Laurent

    2013-11-01

    Human cooperation is typically coordinated by institutions, which determine the outcome structure of the social interactions individuals engage in. Explaining the Neolithic transition from small- to large-scale societies involves understanding how these institutions co-evolve with demography. We study this using a demographically explicit model of institution formation in a patch-structured population. Each patch supports both social and asocial niches. Social individuals create an institution, at a cost to themselves, by negotiating how much of the costly public good provided by cooperators is invested into sanctioning defectors. The remainder of their public good is invested in technology that increases carrying capacity, such as irrigation systems. We show that social individuals can invade a population of asocials, and form institutions that support high levels of cooperation. We then demonstrate conditions where the co-evolution of cooperation, institutions, and demographic carrying capacity creates a transition from small- to large-scale social groups.

  17. Three-dimensional dynamics of collisionless magnetic reconnection in large-scale pair plasmas.

    Science.gov (United States)

    Yin, L; Daughton, W; Karimabadi, H; Albright, B J; Bowers, Kevin J; Margulies, J

    2008-09-19

    Using the largest three-dimensional particle-in-cell simulations to date, collisionless magnetic reconnection in large-scale electron-positron plasmas without a guide field is shown to involve complex interaction of tearing and kink modes. The reconnection onset is patchy and occurs at multiple sites which self-organize to form a single, large diffusion region. The diffusion region tends to elongate in the outflow direction and become unstable to secondary kinking and formation of "plasmoid-rope" structures with finite extent in the current direction. The secondary kink folds the reconnection current layer, while plasmoid ropes at times follow the folding of the current layer. The interplay between these secondary instabilities plays a key role in controlling the time-dependent reconnection rate in large-scale systems.

  18. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  19. Statistical Modeling of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
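
    The "store a model instead of the data" idea behind the mean modeler can be sketched directly. The block size, the synthetic field and the query interface below are invented for illustration and are not AQSim's actual design.

        import numpy as np

        rng = np.random.default_rng(0)
        # A stand-in for one time step of simulation output, with a spatial trend.
        field = rng.standard_normal((1024, 1024)) + np.linspace(0, 5, 1024)

        BLOCK = 64
        nb = field.shape[0] // BLOCK
        # Per-partition summary: the mean of each BLOCK x BLOCK tile.
        block_means = field.reshape(nb, BLOCK, nb, BLOCK).mean(axis=(1, 3))

        def approx_range_mean(r0, r1, c0, c1):
            """Approximate mean over rows [r0, r1) and cols [c0, c1) from block means only."""
            br0, br1 = r0 // BLOCK, -(-r1 // BLOCK)   # blocks overlapping the query
            bc0, bc1 = c0 // BLOCK, -(-c1 // BLOCK)
            return block_means[br0:br1, bc0:bc1].mean()

        exact = field[100:500, 300:900].mean()
        approx = approx_range_mean(100, 500, 300, 900)
        print(f"exact {exact:.4f}  vs  model-based {approx:.4f}")

    The summaries occupy a small fraction of the original storage (here 1/4096 of the raw field), and range queries touch only the summaries, which is the source of the storage and query-time savings described above; the price is the user-controlled modeling error.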

  20. Nonlinear evolution of large-scale structure in the universe

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S.; White, S.D.M.; Davis, M.

    1983-08-15

    Using N-body simulations we study the nonlinear development of primordial density perturbations in an Einstein-de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function $\xi(r)$ and the visual appearance of our adiabatic (or "pancake") models match better the observed distribution of galaxies. This distribution is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of $\xi(r)$ steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be $\lambda_{\rm min}/r_0 = 5.1$; its expected value in a neutrino dominated universe is $4(\Omega h)^{-1}$ ($H_0 = 100h$ km s$^{-1}$ Mpc$^{-1}$). At early epochs these models predict a negligible amplitude for $\xi(r)$ and could explain the lack of measurable clustering in the Ly$\alpha$ absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after $z = 2$. If this collapse precedes galaxy formation as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with $\Omega < 1$.

  1. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    Energy Technology Data Exchange (ETDEWEB)

    Hamid Sarv

    1999-03-01

    Technical and economical feasibility of large-scale CO2 transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing the captured CO2. In one case, CO2 was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe, and attaching it to the offshore structure was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO2 to deep ocean floor depressions. For shorter distances, CO2 delivery by parallel-laid, subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO2, respectively. At these prices, economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO2 transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO2 effluent plumes have been identified as areas that require further attention. Our planned activities in the next Phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO2 discharge and dispersion, and the conceptual economic and engineering evaluation of large-scale implementation.

  2. Accurate emulators for large-scale computer experiments

    CERN Document Server

    Haaland, Ben; 10.1214/11-AOS929

    2012-01-01

    Large-scale computer experiments are becoming increasingly important in science. A multi-step procedure is introduced to statisticians for modeling such experiments, which builds an accurate interpolator in multiple steps. In practice, the procedure shows substantial improvements in overall accuracy, but its theoretical properties are not well established. We introduce the terms nominal and numeric error and decompose the overall error of an interpolator into nominal and numeric portions. Bounds on the numeric and nominal error are developed to show theoretically that substantial gains in overall accuracy can be attained with the multi-step approach.

  3. Large scale solar cooling plants in America, Asia and Europe

    Energy Technology Data Exchange (ETDEWEB)

    Holter, Christian; Olsacher, Nicole [S.O.L.I.D. GmbH, Graz (Austria)

    2010-07-01

    Large scale solar cooling plants with an area between 120 and 1600 m2 are representative examples used to illustrate S.O.L.I.D.'s experiences. The three selected reference solar cooling plants are located on three different continents: America, Asia and Europe. Every region has different framework conditions and its own unforeseen challenges, but professional experience and innovative ideas form the basis on which each plant operates well and satisfies the customer's demand. This verifies that solar cooling is already a proven technology. (orig.)

  4. Simple Method for Large-Scale Fabrication of Plasmonic Structures

    CERN Document Server

    Makarov, Sergey V; Mukhin, Ivan S; Shishkin, Ivan I; Mozharov, Alexey M; Krasnok, Alexander E; Belov, Pavel A

    2015-01-01

    A novel method for single-step, lithography-free, and large-scale laser writing of nanoparticle-based plasmonic structures has been developed. By changing the energy of the femtosecond laser pulses and the thickness of the irradiated gold film, it is possible to vary the diameter of the gold nanoparticles, while the distance between them can be varied through the laser scanning parameters. This method has an advantage over most previously demonstrated methods in its simplicity and versatility, while the quality of the structures is good enough for many applications. In particular, resonant light absorption/scattering and surface-enhanced Raman scattering have been demonstrated on the fabricated nanostructures.

  5. Laser Welding of Large Scale Stainless Steel Aircraft Structures

    Science.gov (United States)

    Reitemeyer, D.; Schultz, V.; Syassen, F.; Seefeld, T.; Vollertsen, F.

    In this paper a welding process for large scale stainless steel structures is presented. The process was developed according to the requirements of an aircraft application. Therefore, stringers are welded onto a skin sheet in a T-joint configuration. The 0.6 mm thick parts are welded with a thin-disc laser; seam lengths up to 1920 mm are demonstrated. The welding process causes angular distortions of the skin sheet which are compensated by a subsequent laser straightening process. Based on a model, straightening process parameters matching the induced welding distortion are predicted. The process combination is successfully applied to stringer-stiffened specimens.

  6. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred

    2010-08-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.

  7. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  8. Facilitating dynamo action via control of large-scale turbulence.

    Science.gov (United States)

    Limone, A; Hatch, D R; Forest, C B; Jenko, F

    2012-12-01

    The magnetohydrodynamic dynamo effect is considered to be the major cause of magnetic field generation in geo- and astrophysical systems. Recent experimental and numerical results show that turbulence constitutes an obstacle to dynamos; yet its role in this context is not totally clear. Via numerical simulations, we identify large-scale turbulent vortices with a detrimental effect on the amplification of the magnetic field in a geometry of experimental interest and propose a strategy for facilitating the dynamo instability by manipulating these detrimental "hidden" dynamics.

  9. Generation of large-scale winds in horizontally anisotropic convection

    CERN Document Server

    von Hardenberg, J; Provenzale, A; Spiegel, E A

    2015-01-01

    We simulate three-dimensional, horizontally periodic Rayleigh-Bénard convection between free-slip horizontal plates, rotating about a horizontal axis. When both the temperature difference between the plates and the rotation rate are sufficiently large, a strong horizontal wind is generated that is perpendicular to both the rotation vector and the gravity vector. The wind is turbulent, large-scale, and vertically sheared. Horizontal anisotropy, engendered here by rotation, appears necessary for such wind generation. Most of the kinetic energy of the flow resides in the wind, and the vertical turbulent heat flux is much lower on average than when there is no wind.

  10. Active power reserves evaluation in large scale PVPPs

    DEFF Research Database (Denmark)

    Crăciun, Bogdan-Ionut; Kerekes, Tamas; Sera, Dezso

    2013-01-01

    A part of the ancillary services has to be shared by the renewable plants. The main focus of the proposed paper is to technically and economically analyze the possibility of having active power reserves in large scale PV power plants (PVPPs) without any auxiliary storage equipment. The provided reserves should contribute to the total amount of Frequency Containment Reserves (FCR) required by TSOs, reserves which are released during transients. To realize this, PVPPs have to operate below their maximum available power and operate in Frequency Sensitive Mode (FSM). The reserve can also be used to fulfill future grid codes (GCs) requirements such as Power Ramp Limitation (PRL) during high slopes of irradiance.

  11. Inflation in de Sitter spacetime and CMB large scales anomaly

    CERN Document Server

    Zhao, Dong; Wang, Ping; Chang, Zhe

    2014-01-01

    The influence of cosmological constant type dark energy in the early universe is investigated. This is accommodated by a new dispersion relation in de Sitter spacetime. We perform a global fitting to explore the cosmological parameter space by using the CosmoMC package with the recently released Planck TT and WMAP Polarization datasets. Using the results from the global fitting, we compute a new CMB temperature-temperature spectrum. The obtained TT spectrum has lower power compared with the one based on the $\Lambda$CDM model at large scales.

  12. Destruction of Be star disk by large scale magnetic fields

    Science.gov (United States)

    Ud-Doula, Asif; Owocki, Stanley P.; Kee, Nathaniel; Vanyo, Michael

    2017-01-01

    Classical Be stars are rapidly rotating stars with circumstellar disks that come and go on time scale of years. Recent observational data strongly suggests that these stars lack the ~10% incidence of global magnetic fields observed in other main-sequence B stars. Such an apparent lack of magnetic fields may indicate that Be disks are fundamentally incompatible with a significant large scale magnetic field. In this work, using numerical magnetohydrodynamics (MHD) simulations, we show that a dipole field of only 100G can lead to the quick disruption of a Be disk. Such a limit is in line with the observational upper limits for these objects.

  13. Application of methanol synthesis reactor to large-scale plants

    Institute of Scientific and Technical Information of China (English)

    LOU Ren; XU Rong-liang; LOU Shou-lin

    2006-01-01

    The development status of world large-scale methanol production technology is analyzed and Linda's JW low-pressure methanol synthesis reactor with uniform temperature is described. JW serial reactors have been successfully introduced and applied in Harbin Gasification Plant, where productivity has been increased by 50%; nine sets of equipment are now successfully running in Harbin Gasification Plant, Jiangsu Xinya, Shandong Kenli, Henan Zhongyuan, Handan Xinyangguang, Shanxi Weihua and Inner Mongolia Tianye. Reactors of 300,000 t/a are now being manufactured for Liaoning Dahua. Some solutions for the structural problems of 1000-5000 t/d methanol synthesis reactors are put forward.

  14. Large-scale magnetic fields from inflation in teleparallel gravity

    CERN Document Server

    Bamba, Kazuharu; Luo, Ling-Wei

    2013-01-01

    Generation of large-scale magnetic fields in inflationary cosmology is studied in teleparallelism, where instead of the scalar curvature in general relativity, the torsion scalar describes the gravity theory. In particular, we investigate a coupling of the electromagnetic field to the torsion scalar during inflation, which leads to the breaking of conformal invariance of the electromagnetic field. We demonstrate that for a power-law type coupling, the current magnetic field strength of $\sim 10^{-9}$ G on the 1 Mpc scale can be generated, if the backreaction effects and strong coupling problem are not taken into consideration.

  15. Design techniques for large scale linear measurement systems

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented.

  16. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Full Text Available Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. This planet-size data brings serious challenges to storage and computing technologies. Cloud computing is an alternative that can crack the nut because it simultaneously provides storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications will help biomedical research make the vast amount of diverse data meaningful and usable.

  17. Search for Large Scale Anisotropies with the Pierre Auger Observatory

    Science.gov (United States)

    Bonino, R.; Pierre Auger Collaboration

    The Pierre Auger Observatory studies the nature and the origin of Ultra High Energy Cosmic Rays ($>3\cdot10^{18}$ eV). Completed at the end of 2008, it has been continuously operating for more than six years. Using data collected from 1 January 2004 until 31 March 2009, we search for large scale anisotropies with two complementary analyses in different energy windows. No significant anisotropies are observed, resulting in bounds on the first harmonic amplitude at the 1% level at EeV energies.

  18. Adiabatic hyperspherical approach to large-scale nuclear dynamics

    CERN Document Server

    Suzuki, Yasuyuki

    2015-01-01

    We formulate a fully microscopic approach to large-scale nuclear dynamics using a hyperradius as a collective coordinate. An adiabatic potential is defined by taking account of all possible configurations at a fixed hyperradius, and its hyperradius dependence plays a key role in governing the global nuclear motion. In order to go to larger systems beyond few-body systems, we suggest basis functions of a microscopic multicluster model, propose a method for calculating matrix elements of an adiabatic Hamiltonian with use of Fourier transforms, and test its effectiveness.

  19. Large scale PV plants - also in Denmark. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Ahm, P. (PA Energy, Malling (Denmark)); Vedde, J. (SiCon. Silicon and PV consulting, Birkeroed (Denmark))

    2011-04-15

    Large scale PV (LPV) plants, plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of the global PV installations. In 2009 large scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but as no real LPV projects have been processed so far, these findings have to be regarded as preliminary. The fast growing number of very large scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the layout of LPV plants. Under the Danish irradiance conditions, with several winter months with very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space and cost against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark is found in three main categories: PV installations on flat roofs of large commercial buildings, PV installations on other large scale infrastructure such as noise barriers, and ground mounted PV installations. The technical potential for all three categories is found to be significant, in the range of 50-250 km2. In terms of energy harvest, PV plants will under Danish conditions exhibit an overall efficiency of about 10% in conversion of the energy content of the light, compared to about 0.3% for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark of 33-35 TWh is about 300 km2. The Danish grid codes and the electricity safety regulations mention very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines. A number of LPV plant scenarios have been investigated in detail based on real commercial offers and

  20. Clusters as cornerstones of large-scale structure.

    Science.gov (United States)

    Gottlöber, S.; Retzlaff, J.; Turchaninov, V.

    1997-04-01

    Galaxy clusters are one of the best tracers of large-scale structure in the Universe on scales well above 100 Mpc. The authors investigate here the clustering properties of a redshift sample of Abell/ACO clusters and compare the observational sample with mock samples constructed from N-body simulations on the basis of four different cosmological models. The authors discuss the power spectrum, the Minkowski functionals and the void statistics of these samples and conclude that the SCDM and TCDM models are ruled out whereas the ΛCDM and BSI models are in agreement with the observational data.

  1. Structure and function of large-scale brain systems.

    Science.gov (United States)

    Koziol, Leonard F; Barker, Lauren A; Joyce, Arthur W; Hrin, Skip

    2014-01-01

    This article introduces the functional neuroanatomy of large-scale brain systems. Both the structure and functions of these brain networks are presented. All human behavior is the result of interactions within and between these brain systems. This system of brain function completely changes our understanding of how cognition and behavior are organized within the brain, replacing the traditional lesion model. Understanding behavior within the context of brain network interactions has profound implications for modifying abstract constructs such as attention, learning, and memory. These constructs also must be understood within the framework of a paradigm shift, which emphasizes ongoing interactions within a dynamically changing environment.

  2. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  3. Petascale computations for Large-scale Atomic and Molecular collisions

    CERN Document Server

    McLaughlin, Brendan M

    2014-01-01

    Petaflop architectures are currently being utilized efficiently to perform large scale computations in Atomic, Molecular and Optical Collisions. We solve the Schroedinger or Dirac equation for the appropriate collision problem using the R-matrix or R-matrix with pseudo-states approach. We briefly outline the parallel methodology used and implemented for the current suite of Breit-Pauli and DARC codes. Various examples are shown of our theoretical results compared with those obtained from Synchrotron Radiation facilities and from Satellite observations. We also indicate future directions and implementation of the R-matrix codes on emerging GPU architectures.

  4. Large-Scale Self-Consistent Nuclear Mass Calculations

    CERN Document Server

    Stoitsov, M V; Dobaczewski, J; Nazarewicz, W

    2006-01-01

    The program of systematic large-scale self-consistent nuclear mass calculations that is based on the nuclear density functional theory represents a rich scientific agenda that is closely aligned with the main research directions in modern nuclear structure and astrophysics, especially the radioactive nuclear beam physics. The quest for the microscopic understanding of the phenomenon of nuclear binding represents, in fact, a number of fundamental and crucial questions of the quantum many-body problem, including the proper treatment of correlations and dynamics in the presence of symmetry breaking. Recent advances and open problems in the field of nuclear mass calculations are presented and discussed.

  5. Development of Large-Scale Spacecraft Fire Safety Experiments

    DEFF Research Database (Denmark)

    Ruff, Gary A.; Urban, David L.; Fernandez-Pello, A. Carlos

    2013-01-01

    ... of the spacecraft fire safety risk. The activity of this project is supported by an international topical team of fire experts from other space agencies who conduct research that is integrated into the overall experiment design. The large-scale space flight experiment will be conducted in an Orbital Sciences ... vehicle, with the data downlinked at the conclusion of the test before the Cygnus vehicle reenters the atmosphere. Several computer modeling and ground-based experiment efforts will complement the flight experiment effort. The international topical team is collaborating with the NASA team in the definition...

  6. Controlled growth of large-scale silver nanowires

    Institute of Scientific and Technical Information of China (English)

    Xiao Cong-Wen; Yang Hai-Tao; Shen Cheng-Min; Li Zi-An; Zhang Huai-Ruo; Liu Fei; Yang Tian-Zhong; Chen Shu-Tang; Gao Hong-Jun

    2005-01-01

    Large-scale silver nanowires with controlled aspect ratio were synthesized via reducing silver nitrate with 1,2-propanediol in the presence of poly(vinyl pyrrolidone) (PVP). Scanning electron microscopy, transmission electron microscopy and x-ray powder diffraction were employed to characterize these silver nanowires. The diameter of the silver nanowires can be readily controlled in the range of 100 to 400 nm by varying the experimental conditions. X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy results show that there exists no chemical bond between the silver and the nitrogen atoms. The interaction between PVP and silver nanowires is mainly through the oxygen atom in the carbonyl group.

  7. Practical Optimal Control of Large-scale Water Distribution Network

    Institute of Scientific and Technical Information of China (English)

    Lv Mou(吕谋); Song Shuang

    2004-01-01

    According to the network characteristics and the actual state of the water supply system in China, an implicit model, which can be solved by the hierarchical optimization method, was established. In particular, based on analyses of a water supply system containing variable-speed pumps, software has been developed successfully. The application of this model to the city of Hangzhou (China) was compared to the experience-based operating strategy. The results of this study showed that the developed model is a promising optimization method for controlling large-scale water supply systems.

  8. Large-scale biophysical evaluation of protein PEGylation effects

    DEFF Research Database (Denmark)

    Vernet, Erik; Popa, Gina; Pozdnyakova, Irina

    2016-01-01

    PEGylation is the most widely used method to chemically modify protein biopharmaceuticals, but surprisingly limited public data is available on the biophysical effects of protein PEGylation. Here we report the first large-scale study, with site-specific mono-PEGylation of 15 different proteins...... and characterization of 61 entities in total using a common set of analytical methods. Predictions of molecular size were typically accurate in comparison with actual size determined by size-exclusion chromatography (SEC) or dynamic light scattering (DLS). In contrast, there was no universal trend regarding the effect...

  9. Large scale obscuration and related climate effects open literature bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  10. Measuring Large-Scale Social Networks with High Resolution

    DEFF Research Database (Denmark)

    Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr

    2014-01-01

    This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years - the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1 000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation...

  11. An iterative decoupling solution method for large scale Lyapunov equations

    Science.gov (United States)

    Athay, T. M.; Sandell, N. R., Jr.

    1976-01-01

    A great deal of attention has been given to the numerical solution of the Lyapunov equation. A useful classification of the variety of solution techniques is the grouping into direct, transformation, and iterative methods. The paper summarizes those methods that are at least partly favorable numerically, giving special attention to two criteria: exploitation of a general sparse system matrix structure and efficiency in resolving the governing linear matrix equation for different matrices. An iterative decoupling solution method is proposed as a promising approach for solving large-scale Lyapunov equations when the system matrix exhibits a general sparse structure. A Fortran computer program that realizes the iterative decoupling algorithm is also discussed.
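
    To make the problem concrete, the continuous-time Lyapunov equation is A P + P A^T + Q = 0. The sketch below is a toy splitting ("decoupling") iteration that repeatedly solves the trivially decoupled diagonal part and lags the off-diagonal coupling; it is meant only to illustrate the flavor of an iterative decoupled solve and is not the paper's algorithm. Convergence of this naive splitting requires a strongly diagonally dominant, stable A, as in the invented example.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov   # direct solver, used as a reference

        def decoupled_lyapunov(A, Q, iters=200, tol=1e-12):
            """Toy splitting iteration for A P + P A^T + Q = 0."""
            d = np.diag(A)
            R = A - np.diag(d)                   # off-diagonal coupling, lagged each sweep
            denom = d[:, None] + d[None, :]      # entries A_ii + A_jj of the diagonal part
            P = np.zeros_like(Q, dtype=float)
            for _ in range(iters):
                rhs = -(Q + R @ P + P @ R.T)
                P_new = rhs / denom              # element-wise solve of the diagonal equation
                if np.linalg.norm(P_new - P) < tol * max(1.0, np.linalg.norm(P)):
                    return P_new
                P = P_new
            return P

        rng = np.random.default_rng(0)
        A = -5.0 * np.eye(4) + 0.3 * rng.standard_normal((4, 4))   # stable, diagonally dominant
        Q = np.eye(4)
        P_iter = decoupled_lyapunov(A, Q)
        P_ref = solve_continuous_lyapunov(A, -Q)                    # solves A P + P A^T = -Q
        print(np.allclose(P_iter, P_ref, atol=1e-8))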

  12. An Atmospheric Large-Scale Cold Plasma Jet

    Institute of Scientific and Technical Information of China (English)

    吕晓桂; 任春生; 马腾才; 冯岩; 王德真

    2012-01-01

    This letter reports on the generation and characteristics of a large-scale dielectric barrier discharge plasma jet at atmospheric pressure. With appropriate parameters, diffuse plasma with a 50×5 mm2 cross-sectional area is obtained. The characteristics of the discharges are diagnosed by using electrical and optical methods. In addition to being generated in helium, plasma is also generated in a mixed gas of helium and oxygen. The oxygen atomic radiant intensity (3p⁵P→3s⁵S, 3p³P→3s³S transitions) is not proportional to the proportion of oxygen in the gas mixture, as shown by the experimental results.

  13. Quantum computation for large-scale image classification

    Science.gov (United States)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-10-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that will classify images by computing the Hamming distance of these features. From the experimental results derived from the benchmark database Caltech 101, and an analysis of the algorithm, an effective approach to large-scale image classification is derived and proposed against the background of big data.
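
    The classification step (nearest-neighbour assignment by Hamming distance over binary feature vectors) can be sketched classically in a few lines. The features below are random stand-ins; the quantum feature extraction via Schmidt decomposition and the quantum learning algorithm itself are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(7)
        n_train, n_test, n_bits = 200, 5, 256

        train_bits = rng.integers(0, 2, size=(n_train, n_bits), dtype=np.uint8)
        train_labels = rng.integers(0, 101, size=n_train)     # 101 Caltech-style classes
        test_bits = rng.integers(0, 2, size=(n_test, n_bits), dtype=np.uint8)

        def hamming_classify(query):
            """Label of the training vector with minimum Hamming distance to `query`."""
            dists = np.count_nonzero(train_bits != query, axis=1)
            return train_labels[np.argmin(dists)]

        print([int(hamming_classify(q)) for q in test_bits])

    The appeal of the quantum formulation is that, with features encoded in amplitudes, the distance comparison over the whole training set can in principle be performed without the explicit loop over all training vectors and bits used above.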

  14. The Saskatchewan River Basin - a large scale observatory for water security research (Invited)

    Science.gov (United States)

    Wheater, H. S.

    2013-12-01

    The 336,000 km2 Saskatchewan River Basin (SaskRB) in Western Canada illustrates many of the issues of Water Security faced world-wide. It poses globally-important science challenges due to the diversity in its hydro-climate and ecological zones. With one of the world's more extreme climates, it embodies environments of global significance, including the Rocky Mountains (source of the major rivers in Western Canada), the Boreal Forest (representing 30% of Canada's land area) and the Prairies (home to 80% of Canada's agriculture). Management concerns include: provision of water resources to more than three million inhabitants, including indigenous communities; balancing competing needs for water between different uses, such as urban centres, industry, agriculture, hydropower and environmental flows; issues of water allocation between upstream and downstream users in the three prairie provinces; managing the risks of flood and droughts; and assessing water quality impacts of discharges from major cities and intensive agricultural production. Superimposed on these issues is the need to understand and manage uncertain water futures, including effects of economic growth and environmental change, in a highly fragmented water governance environment. Key science questions focus on understanding and predicting the effects of land and water management and environmental change on water quantity and quality. To address the science challenges, observational data are necessary across multiple scales. This requires focussed research at intensively monitored sites and small watersheds to improve process understanding and fine-scale models. To understand large-scale effects on river flows and quality, land-atmosphere feedbacks, and regional climate, integrated monitoring, modelling and analysis is needed at large basin scale. And to support water management, new tools are needed for operational management and scenario-based planning that can be implemented across multiple scales and

  15. Exploring Cloud Computing for Large-scale Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  16. EVALUATING UNMANNED AERIAL PLATFORMS FOR CULTURAL HERITAGE LARGE SCALE MAPPING

    Directory of Open Access Journals (Sweden)

    A. Georgopoulos

    2016-06-01

Full Text Available When it comes to large scale mapping of limited areas, especially for cultural heritage sites, things become critical. Optical and non-optical sensors, e.g. LiDAR units, have been developed to sizes and weights that can be lifted by unmanned aerial platforms. At the same time there is an increasing emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, and the advancement of algorithms in conjunction with the increase of available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of the UAS technologies available today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large scale mapping. Furthermore, a thorough analysis, review and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is conducted and presented. As test data set, we use a rich optical and thermal data set from both fixed wing and multi-rotor platforms over an archaeological excavation with adverse height variations, acquired with different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  17. Evaluating large scale orthophotos derived from high resolution satellite imagery

    Science.gov (United States)

    Ioannou, Maria Teresa; Georgopoulos, Andreas

    2013-08-01

    For the purposes of a research project, for the compilation of the archaeological and environmental digital map of the island of Antiparos, the production of updated large scale orthophotos was required. Hence suitable stereoscopic high resolution satellite imagery was acquired. Two Geoeye-1 stereopairs were enough to cover this small island of the Cyclades complex in the central Aegean. For the orientation of the two stereopairs numerous ground control points were determined using GPS observations. Some of them would also serve as check points. The images were processed using commercial stereophotogrammetric software suitable to process satellite stereoscopic imagery. The results of the orientations are evaluated and the digital terrain model was produced using automated and manual procedures. The DTM was checked both internally and externally with comparison to other available DTMs. In this paper the procedures for producing the desired orthophotography are critically presented and the final result is compared and evaluated for its accuracy, completeness and efficiency. The final product is also compared against the orthophotography produced by Ktimatologio S.A. using aerial images in 2007. The orthophotography produced has been evaluated metrically using the available check points, while qualitative evaluation has also been performed. The results are presented and a critical approach for the usability of satellite imagery for the production of large scale orthophotos is attempted.

  18. IP over optical multicasting for large-scale video delivery

    Science.gov (United States)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, since it can significantly improve bandwidth efficiency. However, the scalability and the signal quality of current IPTV can barely compete with existing broadcast digital TV systems, because it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The Chinese 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss challenges in such IP-over-optical multicasting for video delivery.

  19. The effective field theory of cosmological large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M. [Stanford Univ., Stanford, CA (United States); Hertzberg, Mark P. [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Senatore, Leonardo [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2012-09-20

Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ^4(k). As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.

  20. The Impact of Large Scale Environments on Cluster Entropy Profiles

    Science.gov (United States)

    Trierweiler, Isabella; Su, Yuanyuan

    2017-01-01

    We perform a systematic analysis of 21 clusters imaged by the Suzaku satellite to determine the relation between the richness of cluster environments and entropy at large radii. Entropy profiles for clusters are expected to follow a power-law, but Suzaku observations show that the entropy profiles of many clusters are significantly flattened beyond 0.3 Rvir. While the entropy at the outskirts of clusters is thought to be highly dependent on the large scale cluster environment, the exact nature of the environment/entropy relation is unclear. Using the Sloan Digital Sky Survey and 6dF Galaxy Survey, we study the 20 Mpc large scale environment for all clusters in our sample. We find no strong relation between the entropy deviations at the virial radius and the total luminosity of the cluster surroundings, indicating that accretion and mergers have a more complex and indirect influence on the properties of the gas at large radii. We see a possible anti-correlation between virial temperature and richness of the cluster environment and find that density excess appears to play a larger role in the entropy flattening than temperature, suggesting that clumps of gas can lower entropy.
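For orientation, the "entropy" discussed in this record is the usual X-ray pseudo-entropy built from gas temperature and electron density, compared against a gravity-only baseline power law; the standard definitions are sketched below (the 1.1 slope is the commonly quoted baseline value, not a number taken from this record).

```latex
% Standard ICM entropy definition and the baseline power-law profile often used
% as the reference when quantifying flattening at large radii (assumed standard
% conventions, not quoted from the record above).
K(r) = \frac{k_B T(r)}{n_e(r)^{2/3}}, \qquad
K_{\mathrm{baseline}}(r) \propto \left(\frac{r}{R_{200}}\right)^{1.1}
```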

  1. Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy

    Directory of Open Access Journals (Sweden)

    Leandros Perivolaropoulos

    2014-01-01

Full Text Available A wide range of large scale observations hint towards possible modifications of the standard cosmological model, which is based on a homogeneous and isotropic universe with a small cosmological constant and matter. These observations, also known as “cosmic anomalies”, include unexpected Cosmic Microwave Background perturbations on large angular scales, large dipolar peculiar velocity flows of galaxies (“bulk flows”), the measurement of inhomogeneous values of the fine structure constant on cosmological scales (“alpha dipole”), and other effects. The presence of the observational anomalies could either be a large statistical fluctuation in the context of ΛCDM or it could indicate a non-trivial departure from the cosmological principle on Hubble scales. Such a departure is strongly constrained by cosmological observations for matter. For dark energy, however, there are no significant observational constraints for Hubble scale inhomogeneities. In this brief review I discuss some of the theoretical models that can naturally lead to inhomogeneous dark energy, their observational constraints and their potential to explain the large scale cosmic anomalies.

  2. Solving Large Scale Structure in Ten Easy Steps with COLA

    CERN Document Server

    Tassev, Svetlin; Eisenstein, Daniel

    2013-01-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100Mpc/h with particles of mass ~5*10^9Msolar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11Msolar/h. This is only at a modest speed penalty when compared to mocks obt...
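As a rough sketch of the idea (our paraphrase under standard conventions, not the authors' exact notation), COLA splits each particle trajectory into an analytic Lagrangian Perturbation Theory part plus a residual, and lets the N-body time stepper integrate only the residual:

```latex
% COLA frame: x_LPT is computed analytically; only the residual displacement is
% time-stepped, so large scales stay accurate even with very few timesteps.
x(\tau) = x_{\mathrm{LPT}}(\tau) + x_{\mathrm{res}}(\tau), \qquad
\partial_\tau^2 x_{\mathrm{res}} = -\nabla \Phi\big(x\big) - \partial_\tau^2 x_{\mathrm{LPT}}
```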

  3. Large-scale magnetic fields in magnetohydrodynamic turbulence.

    Science.gov (United States)

    Alexakis, Alexandros

    2013-02-22

High Reynolds number magnetohydrodynamic turbulence in the presence of zero-flux large-scale magnetic fields is investigated as a function of the magnetic field strength. For a variety of flow configurations, the energy dissipation rate ε follows the scaling ε ∝ U_rms^3/ℓ even when the large-scale magnetic field energy is twenty times larger than the kinetic energy. A further increase of the magnetic energy showed a transition to the ε ∝ U_rms^2 B_rms/ℓ scaling, implying that magnetic shear becomes more efficient at this point at cascading the energy than the velocity fluctuations. Strongly helical configurations form nonturbulent helicity condensates that deviate from these scalings. Weak turbulence scaling was absent from the investigation. Finally, the magnetic energy spectra support the Kolmogorov spectrum k^(-5/3) while kinetic energy spectra are closer to the Iroshnikov-Kraichnan spectrum k^(-3/2), as observed in the solar wind.

  4. The large scale magnetic fields of thin accretion disks

    CERN Document Server

    Cao, Xinwu

    2013-01-01

    Large scale magnetic field threading an accretion disk is a key ingredient in the jet formation model. The most attractive scenario for the origin of such a large scale field is the advection of the field by the gas in the accretion disk from the interstellar medium or a companion star. However, it is realized that outward diffusion of the accreted field is fast compared to the inward accretion velocity in a geometrically thin accretion disk if the value of the Prandtl number Pm is around unity. In this work, we revisit this problem considering the angular momentum of the disk is removed predominantly by the magnetically driven outflows. The radial velocity of the disk is significantly increased due to the presence of the outflows. Using a simplified model for the vertical disk structure, we find that even moderately weak fields can cause sufficient angular momentum loss via a magnetic wind to balance outward diffusion. There are two equilibrium points, one at low field strengths corresponding to a plasma-bet...

  5. A Novel Approach Towards Large Scale Cross-Media Retrieval

    Institute of Scientific and Technical Information of China (English)

    Bo Lu; Guo-Ren Wang; Ye Yuan

    2012-01-01

With the rapid development of the Internet and multimedia technology, cross-media retrieval is concerned with retrieving all related media objects with multi-modality in response to a submitted query media object. Unfortunately, the complexity and the heterogeneity of multi-modality have posed the following two major challenges for cross-media retrieval: 1) how to construct a unified and compact model for media objects with multi-modality, and 2) how to improve the performance of retrieval for large scale cross-media databases. In this paper, we propose a novel method which is dedicated to solving these issues to achieve effective and accurate cross-media retrieval. Firstly, a multi-modality semantic relationship graph (MSRG) is constructed using the semantic correlation amongst the media objects with multi-modality. Secondly, all the media objects in the MSRG are mapped onto an isomorphic semantic space. Further, an efficient MK-tree index based on heterogeneous data distribution is proposed to manage the media objects within the semantic space and improve the performance of cross-media retrieval. Extensive experiments on real large scale cross-media datasets indicate that our proposal dramatically improves the accuracy and efficiency of cross-media retrieval, outperforming the existing methods significantly.

  6. Large Scale Organization of a Near Wall Turbulent Boundary Layer

    Science.gov (United States)

    Stanislas, Michel; Dekou Tiomajou, Raoul Florent; Foucaut, Jean Marc

    2016-11-01

This study lies in the context of large scale coherent structures investigation in a near wall turbulent boundary layer. An experimental database at high Reynolds numbers (Re_θ = 9830 and Re_θ = 19660) was obtained in the LML wind tunnel with stereo-PIV at 4 Hz and hot wire anemometry at 30 kHz. A Linear Stochastic Estimation procedure is used to reconstruct a three-component field resolved in space and time. Algorithms were developed to extract coherent structures from the reconstructed field. A sample 3D view of the structures is depicted in Figure 1. Uniform momentum regions are characterized by their mean hydraulic diameter in the YZ plane, their life time and their contribution to Reynolds stresses. The vortical motions are characterized by their position, radius, circulation and vorticity, in addition to their life time and their number computed at a fixed position from the wall. The spatial organization of the structures was investigated through a correlation of their respective indicative functions in the spanwise direction. The simplified large scale model that arises is compared to the ones available in the literature. (Figure 1 caption: streamwise low (green) and high (yellow) uniform momentum regions with positive (red) and negative (blue) vortical motions.) This work was supported by Campus International pour la Sécurité et l'Intermodalité des Transports.

  7. Summarizing Large-Scale Database Schema Using Community Detection

    Institute of Scientific and Technical Information of China (English)

    Xue Wang; Xuan Zhou; Shan Wang

    2012-01-01

Schema summarization on large-scale databases is a challenge. In a typical large database schema, a great proportion of the tables are closely connected through a few high-degree tables. It is thus difficult to separate these tables into clusters that represent different topics. Moreover, as a schema can be very big, the schema summary needs to be structured into multiple levels to further improve usability. In this paper, we introduce a new schema summarization approach utilizing techniques of community detection in social networks. Our approach contains three steps. First, we use a community detection algorithm to divide a database schema into subject groups, each representing a specific subject. Second, we cluster the subject groups into abstract domains to form a multi-level navigation structure. Third, we discover representative tables in each cluster to label the schema summary. We evaluate our approach on Freebase, a real-world large-scale database. The results show that our approach can identify subject groups precisely. The generated abstract schema layers are very helpful for users exploring the database.
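A minimal sketch of the first step described above, under the assumption that the schema is modeled as a graph with tables as nodes and join relationships as weighted edges. The table names, weights, and the particular community detection routine are illustrative placeholders, not the paper's own algorithm or data:

```python
# Sketch: divide a database schema graph into "subject groups" via community
# detection (networkx greedy modularity). Table names and weights are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

schema = nx.Graph()
schema.add_weighted_edges_from([
    ("customer", "order", 5.0),
    ("order", "order_item", 8.0),
    ("order_item", "product", 6.0),
    ("product", "category", 3.0),
    ("customer", "address", 2.0),
])

# Each detected community is a candidate subject group in the schema summary.
subject_groups = greedy_modularity_communities(schema, weight="weight")
for i, group in enumerate(subject_groups):
    print(f"subject group {i}: {sorted(group)}")
```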

  8. Large-scale columnar vortices in rotating turbulence

    Science.gov (United States)

    Yokoyama, Naoto; Takaoka, Masanori

    2016-11-01

In rotating turbulence, flow structures are affected by the angular velocity of the system's rotation. When the angular velocity is small, a three-dimensional statistically isotropic flow, which has the Kolmogorov spectrum all over the inertial subrange, is formed. When the angular velocity increases, the flow becomes two-dimensional and anisotropic, and the energy spectrum has a power law k^-2 at small wavenumbers in addition to the Kolmogorov spectrum at large wavenumbers. When the angular velocity decreases, the flow returns to the isotropic one. It is numerically found that the transition between the isotropic and anisotropic flows is hysteretic; the critical angular velocity at which the flow transitions from the anisotropic state to the isotropic one differs from that of the reverse transition. It is also observed that the large-scale columnar structures in the anisotropic flow depend on the external force which maintains a statistically steady state. In some cases, small-scale anticyclonic structures are aligned in a columnar structure apart from the cyclonic Taylor column. The formation mechanism of the large-scale columnar structures will be discussed. This work was partially supported by JSPS KAKENHI.

  9. Development of large-scale functional brain networks in children.

    Directory of Open Access Journals (Sweden)

    Kaustubh Supekar

    2009-07-01

Full Text Available The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children's and young adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.
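An illustrative sketch (not the authors' pipeline) of the kind of graph metrics mentioned above: build a functional connectivity graph by thresholding a correlation matrix of regional time series, then compute the "small-world" ingredients, characteristic path length and clustering coefficient. The data and threshold below are random stand-ins:

```python
# Toy functional brain network: correlation matrix -> thresholded graph -> metrics.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((90, 200))   # 90 regions x 200 time points (placeholder)
corr = np.corrcoef(timeseries)
np.fill_diagonal(corr, 0.0)

adjacency = (np.abs(corr) > 0.1).astype(int)  # hypothetical threshold
g = nx.from_numpy_array(adjacency)
g = g.subgraph(max(nx.connected_components(g), key=len)).copy()

print("clustering coefficient:", nx.average_clustering(g))
print("characteristic path length:", nx.average_shortest_path_length(g))
```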

  10. The effect of large scale inhomogeneities on the luminosity distance

    Science.gov (United States)

    Brouzakis, Nikolaos; Tetradis, Nikolaos; Tzavara, Eleftheria

    2007-02-01

    We study the form of the luminosity distance as a function of redshift in the presence of large scale inhomogeneities, with sizes of order 10 Mpc or larger. We approximate the Universe through the Swiss-cheese model, with each spherical region described by the Lemaitre Tolman Bondi metric. We study the propagation of light beams in this background, assuming that the locations of the source and the observer are random. We derive the optical equations for the evolution of the beam area and shear. Through their integration we determine the configurations that can lead to an increase of the luminosity distance relative to the homogeneous cosmology. We find that this can be achieved if the Universe is composed of spherical void-like regions, with matter concentrated near their surface. For inhomogeneities consistent with the observed large scale structure, the relative increase of the luminosity distance is of the order of a few per cent at redshifts near 1, and falls short of explaining the substantial increase required by the supernova data. On the other hand, the effect we describe is important for the correct determination of the energy content of the Universe from observations.

  11. Large Scale and Performance tests of the ATLAS Online Software

    Institute of Scientific and Technical Information of China (English)

Alexandrov; H. Wolters; et al.

    2001-01-01

One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behaviour have been performed on a set of up to 111 PCs in a configuration which is getting closer to the final size. Large scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the aspects of the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems has been emulated. This paper presents a brief overview of the online system structure, its components, and the large scale integration tests and their results.

  12. Modulation of energetic coherent motions by large-scale topography

    Science.gov (United States)

    Lai, Wing; Hamed, Ali M.; Troolin, Dan; Chamorro, Leonardo P.

    2016-11-01

The distinctive characteristics and dynamics of the large-scale coherent motions induced over 2D and 3D large-scale wavy walls were explored experimentally with time-resolved volumetric PIV, and selected wall-normal high-resolution stereo PIV, in a refractive-index-matching channel. The 2D wall consists of a sinusoidal wave in the streamwise direction with amplitude to wavelength ratio a/λx = 0.05, while the 3D wall has an additional wave in the spanwise direction with a/λy = 0.1. The flow was characterized at Re ≈ 8000, based on the bulk velocity and the channel half height. The walls are such that the amplitude to boundary layer thickness ratio is a/δ99 ≈ 0.1, which resembles geophysical-like topography. Insight on the dynamics of the coherent motions, Reynolds stress and spatial interaction of sweep and ejection events will be discussed in terms of the wall topography modulation.

  13. Large-scale mapping of mutations affecting zebrafish development

    Directory of Open Access Journals (Sweden)

    Neuhauss Stephan C

    2007-01-01

Full Text Available Background: Large-scale mutagenesis screens in the zebrafish employing the mutagen ENU have isolated several hundred mutant loci that represent putative developmental control genes. In order to realize the potential of such screens, systematic genetic mapping of the mutations is necessary. Here we report on a large-scale effort to map the mutations generated in mutagenesis screening at the Max Planck Institute for Developmental Biology by genome scanning with microsatellite markers. Results: We have selected a set of microsatellite markers and developed methods and scoring criteria suitable for efficient, high-throughput genome scanning. We have used these methods to successfully obtain a rough map position for 319 mutant loci from the Tübingen I mutagenesis screen and subsequent screening of the mutant collection. For 277 of these the corresponding gene is not yet identified. Mapping was successful for 80 % of the tested loci. By comparing 21 mutation and gene positions of cloned mutations we have validated the correctness of our linkage group assignments and estimated the standard error of our map positions to be approximately 6 cM. Conclusion: By obtaining rough map positions for over 300 zebrafish loci with developmental phenotypes, we have generated a dataset that will be useful not only for cloning of the affected genes, but also to suggest allelism of mutations with similar phenotypes that will be identified in future screens. Furthermore this work validates the usefulness of our methodology for rapid, systematic and inexpensive microsatellite mapping of zebrafish mutations.

  14. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    Science.gov (United States)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

When it comes to large scale mapping of limited areas, especially for cultural heritage sites, things become critical. Optical and non-optical sensors, e.g. LiDAR units, have been developed to sizes and weights that can be lifted by unmanned aerial platforms. At the same time there is an increasing emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, and the advancement of algorithms in conjunction with the increase of available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of the UAS technologies available today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large scale mapping. Furthermore, a thorough analysis, review and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is conducted and presented. As test data set, we use a rich optical and thermal data set from both fixed wing and multi-rotor platforms over an archaeological excavation with adverse height variations, acquired with different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  15. Large-scale Direct Targeting for Drug Repositioning and Discovery

    Science.gov (United States)

    Zheng, Chunli; Guo, Zihu; Huang, Chao; Wu, Ziyin; Li, Yan; Chen, Xuetong; Fu, Yingxue; Ru, Jinlong; Ali Shar, Piar; Wang, Yuan; Wang, Yonghua

    2015-01-01

A system-level identification of drug-target direct interactions is vital to drug repositioning and discovery. However, determining such interactions experimentally on a large scale remains challenging and expensive even nowadays. The available computational models mainly focus on predicting indirect interactions, or direct interactions on a small scale. To address these problems, in this work a novel algorithm termed weighted ensemble similarity (WES) has been developed to identify drug direct targets based on a large-scale set of 98,327 drug-target relationships. WES includes: (1) identifying the key ligand structural features that are highly related to the pharmacological properties in an ensemble framework; (2) determining a drug's affiliation with a target by evaluating the overall (ensemble) similarity rather than a single-ligand judgment; and (3) integrating the standardized ensemble similarities (Z scores) by a Bayesian network and a multi-variate kernel approach to make predictions. All of this leads WES to predict drug direct targets with external and experimental test accuracies of 70% and 71%, respectively. This shows that the WES method provides a potential in silico model for drug repositioning and discovery. PMID:26155766
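A rough sketch of the "standardized ensemble similarity" idea in steps (2) and (3): score a candidate target by the average similarity of a query drug to that target's known ligands, then standardize against a background of random compounds. The fingerprints, similarity measure and background below are toy stand-ins, not the paper's descriptors or its Bayesian integration step:

```python
# Toy ensemble-similarity Z score for one drug against one target's ligand set.
import numpy as np

rng = np.random.default_rng(1)

def tanimoto(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

query = rng.integers(0, 2, 256)                      # toy binary fingerprint
target_ligands = rng.integers(0, 2, (50, 256))       # known ligands of one target
background = rng.integers(0, 2, (1000, 256))         # random background compounds

ensemble_sim = np.mean([tanimoto(query, lig) for lig in target_ligands])
bg_sims = np.array([np.mean([tanimoto(c, lig) for lig in target_ligands])
                    for c in background])

z_score = (ensemble_sim - bg_sims.mean()) / bg_sims.std()
print(f"ensemble similarity = {ensemble_sim:.3f}, Z = {z_score:.2f}")
```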

  16. Large scale petroleum reservoir simulation and parallel preconditioning algorithms research

    Institute of Scientific and Technical Information of China (English)

    SUN Jiachang; CAO Jianwen

    2004-01-01

Solving large scale linear systems efficiently plays an important role in a petroleum reservoir simulator, and the key part is how to choose an effective parallel preconditioner. Properly choosing a good preconditioner goes beyond the purely algebraic domain. An integrated preconditioner should include such components as the physical background, the characteristics of the PDE mathematical model, the nonlinear solution method, the linear solution algorithm, domain decomposition and parallel computation. We first discuss some parallel preconditioning techniques, and then construct an integrated, reservoir-simulation-oriented preconditioner based on large scale distributed parallel processing. The infrastructure of this preconditioner contains such well-known preconditioning construction techniques as coarse grid correction, constraint residual correction and subspace projection correction. We essentially use a multi-step approach to integrate a total of eight types of preconditioning components in order to produce the final preconditioner. Industrial reservoir data at the scale of millions of grid cells were tested on native high performance computers. Numerical statistics and analyses show that this preconditioner achieves satisfying parallel efficiency and acceleration effect.
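As a generic illustration only (the paper's preconditioner is a multi-step, physics-aware, parallel construction), the sketch below shows the role a preconditioner plays when solving the large sparse systems a reservoir simulator produces, using an incomplete-LU preconditioned Krylov solve in SciPy on a toy matrix:

```python
# ILU-preconditioned GMRES on a toy sparse system (illustrative, not the paper's method).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000                                            # toy 1D diffusion-like system
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)                    # incomplete LU factors
M = spla.LinearOperator((n, n), matvec=ilu.solve)     # acts as the preconditioner M^{-1}

x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"gmres info = {info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```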

  17. Using Large Scale Structure to test Multifield Inflation

    CERN Document Server

    Ferraro, Simone

    2014-01-01

Primordial non-Gaussianity of local type is known to produce a scale-dependent contribution to the galaxy bias. Several classes of multi-field inflationary models predict non-Gaussian bias which is stochastic, in the sense that dark matter and halos don't trace each other perfectly on large scales. In this work, we forecast the ability of next-generation Large Scale Structure surveys to constrain common types of primordial non-Gaussianity like $f_{NL}$, $g_{NL}$ and $\tau_{NL}$ using halo bias, including stochastic contributions. We provide fitting functions for statistical errors on these parameters which can be used for rapid forecasting or survey optimization. A next-generation survey with volume $V = 25 h^{-3}$ Gpc$^3$, median redshift $z = 0.7$ and mean bias $b_g = 2.5$ can achieve $\sigma(f_{NL}) = 6$, $\sigma(g_{NL}) = 10^5$ and $\sigma(\tau_{NL}) = 10^3$ if no mass information is available. If halo masses are available, we show that optimally weighting the halo field in order to reduce sample variance...
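For reference, the scale-dependent bias from local-type non-Gaussianity that underlies forecasts of this kind is usually written in the standard form below (Dalal et al. 2008 and follow-ups); the expression is quoted for orientation and is not taken from this record:

```latex
% Scale-dependent correction to the halo bias from local f_NL, in standard notation:
% b_g is the Gaussian bias, delta_c the collapse threshold, T(k) the transfer
% function and D(z) the growth factor.
\Delta b(k) \;=\; \frac{3\, f_{NL}\,(b_g - 1)\,\delta_c\, \Omega_m H_0^2}{c^2\, k^2\, T(k)\, D(z)}
```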

  18. Simulating the Large-Scale Structure of HI Intensity Maps

    CERN Document Server

    Seehars, Sebastian; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2015-01-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations, the halo model, and a phenomenological prescription for assigning HI mass to halos. The simulations span a redshift range of 0.35 < z < 0.9 in redshift bins of width $\\Delta z \\approx 0.05$ and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects on the angular clustering of HI. We apply and compare several estimators for the angular power spectrum and its covariance. We verify that they agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.

  19. Simulating the large-scale structure of HI intensity maps

    Science.gov (United States)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 Msolar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 Msolar/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
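A minimal sketch of the simplest estimator mentioned above: a pseudo-C_ell measurement of the angular clustering of an intensity map with healpy, with a crude sky-fraction correction for a quarter-sky footprint. The map, input spectrum and mask below are Gaussian stand-ins, not the paper's halo-model HI maps or its full estimator suite:

```python
# Toy pseudo-C_ell measurement of a masked intensity map (illustrative only).
import numpy as np
import healpy as hp

nside = 256
ell = np.arange(3 * nside)
cl_in = 1e-6 / (1.0 + ell) ** 2                      # hypothetical input spectrum

hi_map = hp.synfast(cl_in, nside)                    # simulate a full-sky map
mask = np.ones(hp.nside2npix(nside))                 # quarter-sky cap footprint
theta = hp.pix2ang(nside, np.arange(mask.size))[0]
mask[theta > np.pi / 3] = 0.0

cl_meas = hp.anafast(hi_map * mask) / np.mean(mask)  # crude f_sky correction
print("recovered C_ell at ell=100:", cl_meas[100], "| input:", cl_in[100])
```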

  20. High Speed Networking and Large-scale Simulation in Geodynamics

    Science.gov (United States)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically-meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of a single facility currently available at our disposal. One solution is to utilize a very fast network (e.g. 10Gb optical networks) and available middleware (e.g. Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results in the meeting.

  1. Halo detection via large-scale Bayesian inference

    Science.gov (United States)

    Merson, Alexander I.; Jasche, Jens; Abdalla, Filipe B.; Lahav, Ofer; Wandelt, Benjamin; Jones, D. Heath; Colless, Matthew

    2016-08-01

    We present a proof-of-concept of a novel and fully Bayesian methodology designed to detect haloes of different masses in cosmological observations subject to noise and systematic uncertainties. Our methodology combines the previously published Bayesian large-scale structure inference algorithm, HAmiltonian Density Estimation and Sampling algorithm (HADES), and a Bayesian chain rule (the Blackwell-Rao estimator), which we use to connect the inferred density field to the properties of dark matter haloes. To demonstrate the capability of our approach, we construct a realistic galaxy mock catalogue emulating the wide-area 6-degree Field Galaxy Survey, which has a median redshift of approximately 0.05. Application of HADES to the catalogue provides us with accurately inferred three-dimensional density fields and corresponding quantification of uncertainties inherent to any cosmological observation. We then use a cosmological simulation to relate the amplitude of the density field to the probability of detecting a halo with mass above a specified threshold. With this information, we can sum over the HADES density field realisations to construct maps of detection probabilities and demonstrate the validity of this approach within our mock scenario. We find that the probability of successful detection of haloes in the mock catalogue increases as a function of the signal to noise of the local galaxy observations. Our proposed methodology can easily be extended to account for more complex scientific questions and is a promising novel tool to analyse the cosmic large-scale structure in observations.

  2. Large-Scale Mass Distribution in the Illustris-Simulation

    CERN Document Server

    Haider, Markus; Vogelsberger, Mark; Genel, Shy; Springel, Volker; Torrey, Paul; Hernquist, Lars

    2015-01-01

    Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris Simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 % of the dark matter and 23 % of the baryons are within haloes. The filaments of the cosmic web host a further 45 % of the dark matter and 46 % of the baryons. The...

  3. Modelling large-scale halo bias using the bispectrum

    CERN Document Server

    Pollack, Jennifer E; Porciani, Cristiano

    2011-01-01

    We study the relation between the halo and matter density fields -- commonly termed bias -- in the LCDM framework. In particular, we examine the local model of biasing at quadratic order in matter density. This model is characterized by parameters b_1 and b_2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales and find that the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo power spectra and construct estimates for an effective large-scale bias. We measure the configuration dependence of the halo bispectra B_hhh and reduced bispectra Q_hhh for very large-scale k-space triangles. From this we constrain b_1 and b_2. Using the lowest-order perturbation theory, we find that for B_hhh the...
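For reference, the quadratic local biasing model referred to above is usually written as the following expansion of the smoothed halo overdensity in the smoothed matter overdensity (standard convention; the paper's notation may differ slightly):

```latex
% Local quadratic bias model characterized by b_1 and b_2.
\delta_h(\mathbf{x}) \;=\; b_1\,\delta(\mathbf{x})
  \;+\; \frac{b_2}{2}\left[\delta^2(\mathbf{x}) - \langle \delta^2 \rangle\right]
```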

  4. Large-scale climatic control on European precipitation

    Science.gov (United States)

    Lavers, David; Prudhomme, Christel; Hannah, David

    2010-05-01

    Precipitation variability has a significant impact on society. Sectors such as agriculture and water resources management are reliant on predictable and reliable precipitation supply with extreme variability having potentially adverse socio-economic impacts. Therefore, understanding the climate drivers of precipitation is of human relevance. This research examines the strength, location and seasonality of links between precipitation and large-scale Mean Sea Level Pressure (MSLP) fields across Europe. In particular, we aim to evaluate whether European precipitation is correlated with the same atmospheric circulation patterns or if there is a strong spatial and/or seasonal variation in the strength and location of centres of correlations. The work exploits time series of gridded ERA-40 MSLP on a 2.5˚×2.5˚ grid (0˚N-90˚N and 90˚W-90˚E) and gridded European precipitation from the Ensemble project on a 0.5°×0.5° grid (36.25˚N-74.25˚N and 10.25˚W-24.75˚E). Monthly Spearman rank correlation analysis was performed between MSLP and precipitation. During winter, a significant MSLP-precipitation correlation dipole pattern exists across Europe. Strong negative (positive) correlation located near the Icelandic Low and positive (negative) correlation near the Azores High pressure centres are found in northern (southern) Europe. These correlation dipoles resemble the structure of the North Atlantic Oscillation (NAO). The reversal in the correlation dipole patterns occurs at the latitude of central France, with regions to the north (British Isles, northern France, Scandinavia) having a positive relationship with the NAO, and regions to the south (Italy, Portugal, southern France, Spain) exhibiting a negative relationship with the NAO. In the lee of mountain ranges of eastern Britain and central Sweden, correlation with North Atlantic MSLP is reduced, reflecting a reduced influence of westerly flow on precipitation generation as the mountains act as a barrier to moist
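A minimal sketch of the analysis step described above: a Spearman rank correlation between a monthly MSLP series at one grid point and a precipitation series at one grid cell, restricted to winter months where the dipole pattern is strongest. The arrays are random placeholders, not ERA-40 or ENSEMBLES data:

```python
# Toy monthly Spearman correlation between MSLP and precipitation (winter subset).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_months = 480                                   # e.g. 40 years of monthly values
mslp = rng.normal(1013.0, 8.0, n_months)         # hPa, placeholder MSLP series
precip = 50.0 - 0.8 * (mslp - 1013.0) + rng.normal(0.0, 10.0, n_months)  # mm, placeholder

months = np.tile(np.arange(1, 13), n_months // 12)
winter = np.isin(months, [12, 1, 2])             # DJF subset

rho, p_value = spearmanr(mslp[winter], precip[winter])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```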

  5. Creating restoration landscapes: partnerships in large-scale conservation in the UK

    Directory of Open Access Journals (Sweden)

    William M. Adams

    2016-09-01

    Full Text Available It is increasingly recognized that ecological restoration demands conservation action beyond the borders of existing protected areas. This requires the coordination of land uses and management over a larger area, usually with a range of partners, which presents novel institutional challenges for conservation planners. Interviews were undertaken with managers of a purposive sample of large-scale conservation areas in the UK. Interviews were open-ended and analyzed using standard qualitative methods. Results show a wide variety of organizations are involved in large-scale conservation projects, and that partnerships take time to create and demand resilience in the face of different organizational practices, staff turnover, and short-term funding. Successful partnerships with local communities depend on the establishment of trust and the availability of external funds to support conservation land uses. We conclude that there is no single institutional model for large-scale conservation: success depends on finding institutional strategies that secure long-term conservation outcomes, and ensure that conservation gains are not reversed when funding runs out, private owners change priorities, or land changes hands.

  6. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    Science.gov (United States)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems. Interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and the excitation parameters. The adaptation algorithms involve the sigma-modification approach for the auxiliary control gains, and the projection approach for the excitation parameters, to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the solution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.
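The abstract notes that the linear component's gain requires solving an algebraic Riccati equation. The generic LQR-style sketch below shows such a solve with SciPy on a toy two-state system; the matrices are placeholders and the full decentralized adaptive design in the paper involves much more:

```python
# Solve a continuous algebraic Riccati equation and form a stabilizing gain (toy example).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])        # toy system matrix
B = np.array([[0.0],
              [1.0]])               # toy input matrix
Q = np.eye(2)                       # state weighting
R = np.array([[1.0]])               # input weighting

P = solve_continuous_are(A, B, Q, R)          # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)               # matrix gain of the linear component
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```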

  7. The Genetic Etiology of Tourette Syndrome: Large-Scale Collaborative Efforts on the Precipice of Discovery.

    Science.gov (United States)

    Georgitsi, Marianthi; Willsey, A Jeremy; Mathews, Carol A; State, Matthew; Scharf, Jeremiah M; Paschou, Peristera

    2016-01-01

    Gilles de la Tourette Syndrome (TS) is a childhood-onset neurodevelopmental disorder that is characterized by multiple motor and phonic tics. It has a complex etiology with multiple genes likely interacting with environmental factors to lead to the onset of symptoms. The genetic basis of the disorder remains elusive. However, multiple resources and large-scale projects are coming together, launching a new era in the field and bringing us on the verge of discovery. The large-scale efforts outlined in this report are complementary and represent a range of different approaches to the study of disorders with complex inheritance. The Tourette Syndrome Association International Consortium for Genetics (TSAICG) has focused on large families, parent-proband trios and cases for large case-control designs such as genomewide association studies (GWAS), copy number variation (CNV) scans, and exome/genome sequencing. TIC Genetics targets rare, large effect size mutations in simplex trios, and multigenerational families. The European Multicentre Tics in Children Study (EMTICS) seeks to elucidate gene-environment interactions including the involvement of infection and immune mechanisms in TS etiology. Finally, TS-EUROTRAIN, a Marie Curie Initial Training Network, aims to act as a platform to unify large-scale projects in the field and to educate the next generation of experts. Importantly, these complementary large-scale efforts are joining forces to uncover the full range of genetic variation and environmental risk factors for TS, holding great promise for identifying definitive TS susceptibility genes and shedding light into the complex pathophysiology of this disorder.

  8. The genetic etiology of Tourette Syndrome: Large-scale collaborative efforts on the precipice of discovery

    Directory of Open Access Journals (Sweden)

    Marianthi Georgitsi

    2016-08-01

Full Text Available Gilles de la Tourette Syndrome (TS) is a childhood-onset neurodevelopmental disorder that is characterized by multiple motor and phonic tics. It has a complex etiology, with multiple genes likely interacting with environmental factors to lead to the onset of symptoms. The genetic basis of the disorder remains elusive; however, multiple resources and large-scale projects are coming together, launching a new era in the field and bringing us on the verge of discovery. The large-scale efforts outlined in this report are complementary and represent a range of different approaches to the study of disorders with complex inheritance. The Tourette Syndrome Association International Consortium for Genetics (TSAICG) has focused on large families, parent-proband trios and cases for large case-control designs such as genomewide association studies (GWAS), copy number variation (CNV) scans and exome/genome sequencing. TIC Genetics targets rare, large effect size mutations in simplex trios and multigenerational families. The European Multicentre Tics in Children Study (EMTICS) seeks to elucidate gene-environment interactions including the involvement of infection and immune mechanisms in TS etiology. Finally, TS-EUROTRAIN, a Marie Curie Initial Training Network, aims to act as a platform to unify large-scale projects in the field and to educate the next generation of experts. Importantly, these complementary large-scale efforts are joining forces to uncover the full range of genetic variation and environmental risk factors for TS, holding great promise for identifying definitive TS susceptibility genes and shedding light into the complex pathophysiology of this disorder.

  9. Advection/diffusion of large scale magnetic field in accretion disks

    Directory of Open Access Journals (Sweden)

    R. V. E. Lovelace

    2009-02-01

Full Text Available Activity of the nuclei of galaxies and stellar mass systems involving disk accretion to black holes is thought to be due to (1) a small-scale turbulent magnetic field in the disk (due to the magneto-rotational instability or MRI) which gives a large viscosity enhancing accretion, and (2) a large-scale magnetic field which gives rise to matter outflows and/or electromagnetic jets from the disk which also enhances accretion. An important problem with this picture is that the enhanced viscosity is accompanied by an enhanced magnetic diffusivity which acts to prevent the build up of a significant large-scale field. Recent work has pointed out that the disk's surface layers are non-turbulent and thus highly conducting (or non-diffusive) because the MRI is suppressed high in the disk where the magnetic and radiation pressures are larger than the thermal pressure. Here, we calculate the vertical (z) profiles of the stationary accretion flows (with radial and azimuthal components), and the profiles of the large-scale magnetic field, taking into account the turbulent viscosity and diffusivity due to the MRI and the fact that the turbulence vanishes at the surface of the disk. We derive a sixth-order differential equation for the radial flow velocity vr(z) which depends mainly on the midplane thermal to magnetic pressure ratio β>1 and the Prandtl number of the turbulence P = viscosity/diffusivity. Boundary conditions at the disk surface take into account a possible magnetic wind or jet and allow for a surface current in the highly conducting surface layer. The stationary solutions we find indicate that a weak (β>1) large-scale field does not diffuse away as suggested by earlier work.

  10. JSTOR: Large Scale Digitization of Journals in the United States

    OpenAIRE

    Kevin M. Guthrie

    1999-01-01

    The JSTOR database now includes well over 2 million pages from 61 important journals in 13 academic disciplines. Additional journal content is being digitized at a rate of more than 100,000 pages per month. More than 320 libraries in the United States and Canada have become participating institutions, providing support for the creation, maintenance and growth of this database. Outside of North America, we have established a mirror site in the United Kingdom. Through a novel collaborative rela...

  11. The predictability of large-scale wind-driven flows

    Directory of Open Access Journals (Sweden)

    A. Mahadevan

    2001-01-01

    Full Text Available The singular values associated with optimally growing perturbations to stationary and time-dependent solutions for the general circulation in an ocean basin provide a measure of the rate at which solutions with nearby initial conditions begin to diverge, and hence, a measure of the predictability of the flow. In this paper, the singular vectors and singular values of stationary and evolving examples of wind-driven, double-gyre circulations in different flow regimes are explored. By changing the Reynolds number in simple quasi-geostrophic models of the wind-driven circulation, steady, weakly aperiodic and chaotic states may be examined. The singular vectors of the steady state reveal some of the physical mechanisms responsible for optimally growing perturbations. In time-dependent cases, the dominant singular values show significant variability in time, indicating strong variations in the predictability of the flow. When the underlying flow is weakly aperiodic, the dominant singular values co-vary with integral measures of the large-scale flow, such as the basin-integrated upper ocean kinetic energy and the transport in the western boundary current extension. Furthermore, in a reduced gravity quasi-geostrophic model of a weakly aperiodic, double-gyre flow, the behaviour of the dominant singular values may be used to predict a change in the large-scale flow, a feature not shared by an analogous two-layer model. When the circulation is in a strongly aperiodic state, the dominant singular values no longer vary coherently with integral measures of the flow. Instead, they fluctuate in a very aperiodic fashion on mesoscale time scales. The dominant singular vectors then depend strongly on the arrangement of mesoscale features in the flow and the evolved forms of the associated singular vectors have relatively short spatial scales. These results have several implications. In weakly aperiodic, periodic, and stationary regimes, the mesoscale energy
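A sketch of the basic computation behind the abstract above: the singular values of a tangent-linear propagator (mapping initial perturbations to evolved perturbations over some interval) measure the fastest possible perturbation growth, and hence predictability. The propagator below is a random stand-in, not a linearized quasi-geostrophic model:

```python
# Optimal perturbation growth from the SVD of a (placeholder) tangent-linear propagator.
import numpy as np

rng = np.random.default_rng(7)
n = 200                                        # toy state dimension
M = rng.standard_normal((n, n)) / np.sqrt(n)   # placeholder propagator matrix

# Right singular vectors = optimal initial perturbations; singular values = growth factors.
U, s, Vt = np.linalg.svd(M)
optimal_growth = s[0]
optimal_initial_perturbation = Vt[0]

print("dominant singular value (optimal growth factor):", optimal_growth)
print("norm of evolved optimal perturbation:",
      np.linalg.norm(M @ optimal_initial_perturbation))
```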

  12. Large-Scale Graphene Film Deposition for Monolithic Device Fabrication

    Science.gov (United States)

    Al-shurman, Khaled

Since 1958, the concept of the integrated circuit (IC) has driven great technological developments and helped in shrinking electronic devices. Nowadays, an IC consists of more than a million compacted transistors. The majority of current ICs use silicon as the semiconductor material. According to Moore's law, the number of transistors built into a microchip can double every two years. However, silicon device manufacturing is reaching its physical limits: as circuitry shrinks toward seven nanometers, quantum effects such as tunneling can no longer be controlled. Hence, there is an urgent need for a new platform material to replace Si. Graphene is considered a promising material with enormous potential applications in many electronic and optoelectronic devices due to its superior properties. There are several techniques to produce graphene films. Among these techniques, chemical vapor deposition (CVD) offers a very convenient method to fabricate large-scale graphene films. Though the CVD method is suitable for large-area growth of graphene, transferring the graphene film to silicon-based substrates is required. Furthermore, the graphene films thus achieved are, in fact, not single crystalline. Also, graphene fabrication utilizing Cu and Ni at high growth temperature contaminates the substrate that holds the Si CMOS circuitry, as well as the CVD chamber. So, lowering the deposition temperature is another technological milestone for the successful adoption of graphene in integrated circuit fabrication. In this research, direct large-scale graphene film fabrication on a silicon-based platform (i.e. SiO2 and Si3N4) at low temperature was achieved. With a focus on low-temperature graphene growth, hot-filament chemical vapor deposition (HF-CVD) was utilized to synthesize graphene film using a 200 nm thick nickel film. Raman spectroscopy was utilized to examine graphene formation on the bottom side of the Ni film

  13. Climatological context for large-scale coral bleaching

    Science.gov (United States)

    Barton, A. D.; Casey, K. S.

    2005-12-01

    Large-scale coral bleaching was first observed in 1979 and has occurred throughout virtually all of the tropics since that time. Severe bleaching may result in the loss of live coral and in a decline of the integrity of the impacted coral reef ecosystem. Despite the extensive scientific research and increased public awareness of coral bleaching, uncertainties remain about the past and future of large-scale coral bleaching. In order to reduce these uncertainties and place large-scale coral bleaching in the longer-term climatological context, specific criteria and methods for using historical sea surface temperature (SST) data to examine coral bleaching-related thermal conditions are proposed by analyzing three, 132 year SST reconstructions: ERSST, HadISST1, and GISST2.3b. These methodologies are applied to case studies at Discovery Bay, Jamaica (77.27°W, 18.45°N), Sombrero Reef, Florida, USA (81.11°W, 24.63°N), Academy Bay, Galápagos, Ecuador (90.31°W, 0.74°S), Pearl and Hermes Reef, Northwest Hawaiian Islands, USA (175.83°W, 27.83°N), Midway Island, Northwest Hawaiian Islands, USA (177.37°W, 28.25°N), Davies Reef, Australia (147.68°E, 18.83°S), and North Male Atoll, Maldives (73.35°E, 4.70°N). The results of this study show that (1) The historical SST data provide a useful long-term record of thermal conditions in reef ecosystems, giving important insight into the thermal history of coral reefs and (2) While coral bleaching and anomalously warm SSTs have occurred over much of the world in recent decades, case studies in the Caribbean, Northwest Hawaiian Islands, and parts of other regions such as the Great Barrier Reef exhibited SST conditions and cumulative thermal stress prior to 1979 that were comparable to those conditions observed during the strong, frequent coral bleaching events since 1979. This climatological context and knowledge of past environmental conditions in reef ecosystems may foster a better understanding of how coral reefs will
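Purely for illustration, one common way to quantify "cumulative thermal stress" from a monthly SST series is a degree-heating-style metric: exceedance above the maximum monthly mean climatology, accumulated over a running window. The paper defines its own criteria; the thresholds and data below are placeholders:

```python
# Toy cumulative thermal stress metric from a synthetic monthly SST series.
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(132 * 12)                                # 132 years, monthly
climatology = 27.0 + 1.5 * np.sin(2 * np.pi * months / 12)  # seasonal cycle, deg C
sst = climatology + rng.normal(0.0, 0.4, months.size)
sst[-24:] += 0.8                                            # impose a recent warm anomaly

mmm = climatology.max()                                     # max monthly mean proxy
hotspot = np.clip(sst - mmm, 0.0, None)                     # exceedance above MMM

# Cumulative stress over a running 4-month window (degree C-months above MMM).
window = 4
stress = np.convolve(hotspot, np.ones(window), mode="same")
print("peak cumulative thermal stress (deg C-months):", stress.max())
```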

  14. Probing large-scale structure with radio observations

    Science.gov (United States)

    Brown, Shea D.

    This thesis focuses on detecting magnetized relativistic plasma in the intergalactic medium (IGM) of filamentary large-scale structure (LSS) by observing synchrotron emission emitted by structure formation shocks. Little is known about the IGM beyond the largest clusters of galaxies, and synchrotron emission holds enormous promise as a means of probing magnetic fields and relativistic particle populations in these low density regions. I'll first report on observations taken at the Very Large Array and the Westerbork Synthesis Radio Telescope of the diffuse radio source 0809+39. I use these observations to demonstrate that 0809+39 is likely the first "radio relic" discovered that is not associated with a rich X-ray emitting cluster of galaxies. I then demonstrate that an unconventional reprocessing of the NVSS polarization survey can reveal structures on scales from 15' to hundreds of degrees, far larger than the nominal shortest-baseline scale. This yields hundreds of new diffuse sources as well as the identification of a new nearby galactic loop. These observations also highlight the major obstacle that diffuse galactic foreground emission poses for any search for large-scale, low surface-brightness extragalactic emission. I therefore explore the cross-correlation of diffuse radio emission with optical tracers of LSS as a means of statistically detecting the presence of magnetic fields in the low-density regions of the cosmic web. This initial study with the Bonn 1.4 GHz radio survey yields an upper limit of 0.2 mG for large-scale filament magnetic fields. Finally, I report on new Green Bank Telescope and Westerbork Synthesis Radio Telescope observations of the famous Coma cluster of galaxies. Major findings include an extension to the Coma cluster radio relic source 1253+275 which makes its total extent ~2 Mpc, as well as a sharp edge, or "front", on the Western side of the radio halo which shows a strong correlation with merger activity associated with an

  15. Solving large scale traveling salesman problems by chaotic neurodynamics.

    Science.gov (United States)

    Hasegawa, Mikio; Ikeguchi, Tohru; Aihara, Kazuyuki

    2002-03-01

    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n^2 for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Finally, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches.
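
    The record above builds its chaotic search on top of tabu search, with refractory effects playing the role of tabu effects. As a point of reference only (a conventional 2-opt tabu search, not the authors' chaotic neural network), the sketch below illustrates the tabu mechanism on a small random symmetric TSP instance; the tenure, iteration count, and instance are arbitrary.

        # Hedged sketch: a conventional 2-opt tabu search for a small symmetric TSP.
        # Illustrative baseline only; not the chaotic neurodynamics of the record above.
        import itertools, math, random

        def tour_length(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def tabu_2opt(dist, iters=200, tenure=10):
            n = len(dist)
            tour = list(range(n)); random.shuffle(tour)
            best, best_len = tour[:], tour_length(tour, dist)
            tabu = {}                                   # move -> iteration until which it is tabu
            for it in range(iters):
                candidates = []
                for i, j in itertools.combinations(range(n), 2):
                    new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse one segment
                    length = tour_length(new, dist)
                    if tabu.get((i, j), -1) < it or length < best_len:    # aspiration criterion
                        candidates.append((length, (i, j), new))
                length, move, tour = min(candidates)    # best admissible neighbour
                tabu[move] = it + tenure                # forbid repeating this move for a while
                if length < best_len:
                    best, best_len = tour[:], length
            return best, best_len

        if __name__ == "__main__":
            random.seed(0)
            pts = [(random.random(), random.random()) for _ in range(12)]
            d = [[math.dist(p, q) for q in pts] for p in pts]
            print(tabu_2opt(d))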

  16. Large scale land use cartography of special areas

    Energy Technology Data Exchange (ETDEWEB)

    Amico, F.D.; Maccarone, D.; Pandiscia, G.V. [Nuova Telespazio S.p.A., Rome (Italy)]; and others

    1996-11-01

    On 06 October 1993 an aerial remote sensing mission was carried out over the "Mounts of the Sila" area, using a DAEDALUS ATM multispectral scanner, in the framework of the TELAER project, supported by I.A.S.M. (Istituto per l'Assistenza e lo Sviluppo del Mezzogiorno). The study area is inside the National Park of Calabria, well known for its coniferous forests. The collected imagery was used to produce a large scale land use cartography, at a scale of 1:5,000, extracting information on natural and anthropical vegetation from the multispectral images, with the aid of stereo photos acquired simultaneously. 5 refs., 1 fig., 1 tab.

  17. Statistics of Caustics in Large-Scale Structure Formation

    CERN Document Server

    Feldbrugge, Job; van de Weygaert, Rien

    2014-01-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zeldovich formalism and confined to the 1D and 2D situations, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
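
    For readers unfamiliar with the Zeldovich formalism cited above, a standard statement (up to sign conventions, and not reproduced from the paper itself) is that particles follow ballistic displacements and caustics appear where the Lagrangian-to-Eulerian map becomes singular:

        \mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\nabla_{\mathbf{q}}\Psi(\mathbf{q}),
        \qquad
        \det\!\left(\frac{\partial \mathbf{x}}{\partial \mathbf{q}}\right)
          = \prod_i \bigl(1 - D(t)\,\lambda_i(\mathbf{q})\bigr) = 0,

    where D(t) is the linear growth factor, \Psi the displacement potential, and \lambda_i(\mathbf{q}) the eigenvalues of the deformation tensor \partial^2\Psi/\partial q_i \partial q_j; the first caustics form where D(t)\,\lambda_{\max}(\mathbf{q}) reaches unity.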

  18. Building a Large-Scale Knowledge Base for Machine Translation

    CERN Document Server

    Knight, Kevin; Luk, Steve K.

    1994-01-01

    Knowledge-based machine translation (KBMT) systems have achieved excellent results in constrained domains, but have not yet scaled up to newspaper text. The reason is that knowledge resources (lexicons, grammar rules, world models) must be painstakingly handcrafted from scratch. One of the hypotheses being tested in the PANGLOSS machine translation project is whether or not these resources can be semi-automatically acquired on a very large scale. This paper focuses on the construction of a large ontology (or knowledge base, or world model) for supporting KBMT. It contains representations for some 70,000 commonly encountered objects, processes, qualities, and relations. The ontology was constructed by merging various online dictionaries, semantic networks, and bilingual resources, through semi-automatic methods. Some of these methods (e.g., conceptual matching of semantic taxonomies) are broadly applicable to problems of importing/exporting knowledge from one KB to another. Other methods (e.g., bilingual match...

  19. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram

    2017-03-06

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate the inter-cell interference. Particularly, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR based precoding design in which the regularization factor is used to allow better resilience with respect to the channel estimation errors. Based on tools from random matrix theory, we provide an analytical characterization of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided in order to validate our findings and to confirm the performance of the proposed precoding scheme.
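
    For context, a hedged sketch of the quantities involved (standard textbook definitions, not the paper's random-matrix derivation): the signal-to-leakage-and-noise ratio of user k with precoder \mathbf{w}_k and channel vectors \mathbf{h}_j, and the well-known form of its maximizer, are

        \mathrm{SLNR}_k = \frac{|\mathbf{h}_k^{H}\mathbf{w}_k|^2}
                               {\sigma_k^2 + \sum_{j\neq k}|\mathbf{h}_j^{H}\mathbf{w}_k|^2},
        \qquad
        \mathbf{w}_k \;\propto\; \Bigl(\sigma_k^2\mathbf{I} + \sum_{j\neq k}\mathbf{h}_j\mathbf{h}_j^{H}\Bigr)^{-1}\mathbf{h}_k,

    a regularized channel inversion, which is why a tunable regularization factor in place of \sigma_k^2 is a natural handle for robustness against channel estimation errors.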

  20. Unfolding large-scale online collaborative human dynamics

    CERN Document Server

    Zha, Yilong; Zhou, Changsong

    2015-01-01

    Large-scale interacting human activities underlie all social and economic phenomena, but quantitative understanding of their regular patterns and mechanisms is very challenging and still rare. Self-organized online collaborative activities with precise records of event timing provide an unprecedented opportunity. Our empirical analysis of the history of millions of updates in Wikipedia shows a universal double power-law distribution of time intervals between consecutive updates of an article. We then propose a generic model to unfold collaborative human activities into three modules: (i) individual behavior characterized by Poissonian initiation of an action, (ii) human interaction captured by a cascading response to others with a power-law waiting time, and (iii) population growth due to an increasing number of interacting individuals. This unfolding allows us to obtain an analytical formula that is fully supported by the universal patterns in empirical data. Our modeling approaches reveal "simplicity" beyond complex interac...

  1. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Numerous studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective route to carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  2. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie

    2013-01-01

    This thesis presents the main results of a four-year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km2...... geophysical data set has been collected in this study consisting of: high-resolution seismic data, airborne transient electromagnetic data (SkyTEM), ground penetrating radar (GPR) and geoelectrical data. Furthermore, data from around 600 boreholes are available, but since the majority of these are very short...... incorporating a priori information resulted in significantly more realistic results. When planning geophysical mapping campaigns, the resolution capabilities of the methods are crucial for deciding which methods to use. In heterogeneous geological environments, dense data with a high degree of detail...

  3. Split Bregman method for large scale fused Lasso

    CERN Document Server

    Ye, Gui-Bo

    2010-01-01

    Ordering of regression or classification coefficients occurs in many real-world applications. Fused Lasso exploits this ordering by explicitly regularizing the differences between neighboring coefficients through an $\\ell_1$ norm regularizer. However, due to the nonseparability and nonsmoothness of the regularization term, solving the fused Lasso problem is computationally demanding. Existing solvers can only deal with problems of small or medium size, or a special case of the fused Lasso problem in which the predictor matrix is the identity matrix. In this paper, we propose an iterative algorithm based on the split Bregman method to solve a class of large-scale fused Lasso problems, including a generalized fused Lasso and a fused Lasso support vector classifier. We derive our algorithm using the augmented Lagrangian method and prove its convergence properties. The performance of our method is tested on both artificial data and real-world applications including proteomic data from mass spectrometry and genomic data from array...
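
    For reference, the basic fused Lasso problem referred to above (the standard formulation, not the paper's generalized variant) is

        \min_{\beta}\;\tfrac{1}{2}\|y - X\beta\|_2^2
          + \lambda_1\sum_{j=1}^{p}|\beta_j|
          + \lambda_2\sum_{j=2}^{p}|\beta_j - \beta_{j-1}|,

    where the first penalty enforces sparsity and the second penalizes differences between neighboring coefficients; split Bregman approaches handle the two nonsmooth terms by introducing auxiliary variables and alternately minimizing an augmented Lagrangian.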

  4. Large-scale comparative visualisation of sets of multidimensional data

    CERN Document Server

    Vohl, Dany; Fluke, Christopher J; Poudel, Govinda; Georgiou-Karistianis, Nellie; Hassan, Amr H; Benovitski, Yuri; Wong, Tsz Ho; Kaluza, Owen; Nguyen, Toan D; Bonnington, C Paul

    2016-01-01

    We present encube $-$ a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: 1) the integration of comparative visualisation and analysis into a unified system; 2) the documentation of the discovery process; and 3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local deskt...

  5. Computational solutions to large-scale data management and analysis.

    Science.gov (United States)

    Schadt, Eric E; Linderman, Michael D; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P

    2010-09-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist - such as cloud and heterogeneous computing - to successfully tackle our big data problems.

  6. Modeling The Large Scale Bias of Neutral Hydrogen

    CERN Document Server

    Marin, Felipe; Seo, Hee-Jong; Vallinotto, Alberto

    2009-01-01

    We present analytical estimates of the large scale bias of neutral Hydrogen (HI) based on the Halo Occupation Distribution formalism. We use a simple, non-parametric model which monotonically relates the total mass of a halo with its HI mass at zero redshift; for earlier times we assume limiting models for the HI density parameter evolution, consistent with the data presently available, as well as two main scenarios for the evolution of our HI mass - Halo mass relation. We find that both the linear and the first non-linear bias terms exhibit a remarkable evolution with redshift, regardless of the specific limiting model assumed for the HI evolution. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not sensibly affect the measurement of the HI Power Spectrum.
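
    A hedged sketch of the kind of expression involved (the standard HI-weighted halo bias under the Halo Occupation Distribution approach; the paper's specific HI mass - halo mass relation is not reproduced here):

        b_{\mathrm{HI}}(z) =
          \frac{\int \mathrm{d}M\; n(M,z)\, b(M,z)\, M_{\mathrm{HI}}(M,z)}
               {\int \mathrm{d}M\; n(M,z)\, M_{\mathrm{HI}}(M,z)},

    where n(M,z) is the halo mass function, b(M,z) the linear halo bias, and M_{\mathrm{HI}}(M,z) the assumed relation between HI mass and halo mass.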

  7. Isolating relativistic effects in large-scale structure

    CERN Document Server

    Bonvin, Camille

    2014-01-01

    We present a fully relativistic calculation of the observed galaxy number counts in the linear regime. We show that besides the density fluctuations and redshift-space distortions, various relativistic effects contribute to observations at large scales. These effects all have the same physical origin: they result from the fact that our coordinate system, namely the galaxy redshift and the incoming photons' direction, is distorted by inhomogeneities in our universe. We then discuss the impact of the relativistic effects on the angular power spectrum and on the two-point correlation function in configuration space. We show that the latter is very well adapted to isolate the relativistic effects since it naturally makes use of the symmetries of the different contributions. In particular, we discuss how the Doppler effect and the gravitational redshift distortions can be isolated by looking for a dipole in the cross-correlation function between a bright and a faint population of galaxies.

  8. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus;

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted...... locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated...... densities, thus with what certainty the segmented object is not a part of the background. Because the method relies on local information it is very robust to changes in lighting conditions and shadowing effects. The method is applied to endoscopic images of small particles submerged in fluid captured through...
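
    As a hedged illustration of segmentation-as-outlier-detection (a simple robust Gaussian background model, not the estimator of the record above), the sketch below flags pixels whose intensity is improbable under the estimated background; the synthetic image, significance level, and constants are illustrative.

        # Hedged sketch: segment bright particles as outliers of an estimated
        # Gaussian background; illustrative only, not the authors' estimator.
        import numpy as np
        from scipy.stats import norm

        def segment_outliers(image, alpha=1e-3):
            """Return a boolean mask of pixels unlikely to belong to the background."""
            # Robust background estimate from the median and the median absolute deviation.
            med = np.median(image)
            mad = np.median(np.abs(image - med)) * 1.4826   # MAD -> sigma for a Gaussian
            # One-sided p-value of each pixel under the background model.
            pvals = norm.sf(image, loc=med, scale=mad)
            return pvals < alpha                            # outliers = segmented object

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = rng.normal(100.0, 5.0, size=(64, 64))     # synthetic background
            img[20:24, 30:34] += 40.0                       # a bright particle
            print(segment_outliers(img).sum(), "pixels segmented")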

  9. High pressure sheet metal forming of large scale body structures

    Energy Technology Data Exchange (ETDEWEB)

    Trompeter, M.; Krux, R.; Homberg, W.; Kleiner, M. [Dortmund Univ. (Germany). Inst. of Forming Technology and Lightweight Construction]

    2005-07-01

    An important trend in the automotive industry is the weight reduction of car bodies by lightweight construction. One approach to realise lightweight structures is the use of load optimised sheet metal parts (e.g. tailored blanks), especially for crash relevant car body structures. To form such parts which are mostly complex and primarily made of high strength steels, the use of working media based forming processes is favorable. The paper presents the manufacturing of a large scale structural component made of tailor rolled blanks (TRB) by high pressure sheet metal forming (HBU). The paper focuses mainly on the tooling system, which is integrated into a specific 100 MN hydroform press at the IUL. The HBU tool basically consists of a multipoint blankholder, a specially designed flange draw-in sensor, which is necessary to determine the material flow, and a sealing system. Furthermore, the paper presents a strategy for an effective closed loop flange draw-in control. (orig.)

  10. Hashkat: Large-scale simulations of online social networks

    CERN Document Server

    Ryczko, Kevin; Buhagiar, Nicholas; Tamblyn, Isaac

    2016-01-01

    Hashkat (http://hashkat.org) is a free, open source, agent based simulation software package designed to simulate large-scale online social networks (e.g. Twitter, Facebook, LinkedIn, etc). It allows for dynamic agent generation, edge creation, and information propagation. The purpose of hashkat is to study the growth of online social networks and how information flows within them. Like real life online social networks, hashkat incorporates user relationships, information diffusion, and trending topics. Hashkat was implemented in C++, and was designed with extensibility in mind. The software includes Shell and Python scripts for easy installation and usability. In this report, we describe all of the algorithms and features integrated into hashkat before moving on to example use cases. In general, hashkat can be used to understand the underlying topology of social networks, validate sampling methods of such networks, develop business strategy for advertising on online social networks, and test new features of ...

  11. Interloper bias in future large-scale structure surveys

    CERN Document Server

    Pullen, Anthony R; Dore, Olivier; Raccanelli, Alvise

    2015-01-01

    Next-generation spectroscopic surveys will map the large-scale structure of the observable universe, using emission line galaxies as tracers. While each survey will map the sky with a specific emission line, interloping emission lines can masquerade as the survey's intended emission line at different redshifts. Interloping lines from galaxies that are not removed can contaminate the power spectrum measurement, mixing correlations from various redshifts and diluting the true signal. We assess the potential for power spectrum contamination, finding that an interloper fraction worse than 0.2% could bias power spectrum measurements for future surveys by more than 10% of statistical errors, while also biasing inferences based on the power spectrum. We also construct a formalism for predicting biases for cosmological parameter measurements, and we demonstrate that a 0.3% interloper fraction could bias measurements of the growth rate by more than 10% of the error, which can affect constraints from upcoming surveys o...

  12. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    , and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have almost not been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...... structure of the problem. In both solvers, information of the exact Hessian is considered. A robust iterative method is implemented to efficiently solve large-scale linear systems. Both TopSQP and TopIP have successful results in terms of convergence, number of iterations, and objective function values...

  13. On the Hyperbolicity of Large-Scale Networks

    CERN Document Server

    Kennedy, W Sean; Saniee, Iraj

    2013-01-01

    Through detailed analysis of scores of publicly available data sets corresponding to a wide range of large-scale networks, from communication and road networks to various forms of social networks, we explore a little-studied geometric characteristic of real-life networks, namely their hyperbolicity. In smooth geometry, hyperbolicity captures the notion of negative curvature; within the more abstract context of metric spaces, it can be generalized as δ-hyperbolicity. This generalized definition can be applied to graphs, which we explore in this report. We provide strong evidence that communication and social networks exhibit this fundamental property, and through extensive computations we quantify the degree of hyperbolicity of each network in comparison to its diameter. By contrast, and as evidence of the validity of the methodology, applying the same methods to the road networks shows that they are not hyperbolic, which is as expected. Finally, we present practical computational means for detection of hyperb...
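
    As a hedged illustration of the quantity involved (the standard four-point definition of Gromov δ-hyperbolicity, computed by brute force on a toy graph rather than by the methods of the record above):

        # Hedged sketch: Gromov delta-hyperbolicity of a small unweighted graph via
        # the four-point condition. Brute force; real networks need smarter sampling.
        from collections import deque
        from itertools import combinations

        def bfs_distances(graph, source):
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in graph[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def gromov_delta(graph):
            nodes = list(graph)
            d = {u: bfs_distances(graph, u) for u in nodes}      # all-pairs shortest paths
            delta = 0.0
            for w, x, y, z in combinations(nodes, 4):
                sums = sorted([d[w][x] + d[y][z], d[w][y] + d[x][z], d[w][z] + d[x][y]])
                delta = max(delta, (sums[2] - sums[1]) / 2.0)    # four-point condition
            return delta

        if __name__ == "__main__":
            # A 4-cycle has delta = 1; trees have delta = 0.
            cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
            print(gromov_delta(cycle4))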

  14. The gamma ray background from large scale structure formation

    CERN Document Server

    Gabici, Stefano; Blasi, Pasquale

    2003-01-01

    Hierarchical clustering of dark matter halos is thought to describe well the large scale structure of the universe. The baryonic component of the halos is shock heated to the virial temperature while a small fraction of the energy flux through the shocks may be energized through the first order Fermi process to relativistic energy per particle. It has been proposed that the electrons accelerated in this way may upscatter the photons of the universal microwave background to gamma ray energies and indeed generate a diffuse background of gamma rays that compares well to the observations. In this paper we calculate the spectra of the particles accelerated at the merger shocks and re-evaluate the contribution of structure formation to the extragalactic diffuse gamma ray background (EDGRB), concluding that this contribution adds up to at most 10% of the observed EDGRB.

  15. Visualizing large-scale uncertainty in astrophysical data.

    Science.gov (United States)

    Li, Hongwei; Fu, Chi-Wing; Li, Yinggang; Hanson, Andrew

    2007-01-01

    Visualization of uncertainty or error in astrophysical data is seldom available in simulations of astronomical phenomena, and yet almost all rendered attributes possess some degree of uncertainty due to observational error. Uncertainties associated with spatial location typically vary significantly with scale and thus introduce further complexity in the interpretation of a given visualization. This paper introduces effective techniques for visualizing uncertainty in large-scale virtual astrophysical environments. Building upon our previous transparently scalable visualization architecture, we develop tools that enhance the perception and comprehension of uncertainty across wide scale ranges. Our methods include a unified color-coding scheme for representing log-scale distances and percentage errors, an ellipsoid model to represent positional uncertainty, an ellipsoid envelope model to expose trajectory uncertainty, and a magic-glass design supporting the selection of ranges of log-scale distance and uncertainty parameters, as well as an overview mode and a scalable WIM tool for exposing the magnitudes of spatial context and uncertainty.

  16. Optimization of large scale food production using Lean Manufacturing principles

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Breum, Gitte

    2009-01-01

    , the main purposes of implementing Lean were to rationalise internal procedures and to increase production efficiency following a change from cook-serve production to cook-chill, and a reduction in the number of employees. It was also important that product quality and working environment should...... not be negatively affected by the rationalisation of production procedures. The field study shows that Lean principles can be applied in meal production and can result in increased production efficiency and systematic improvement of product quality without negative effects on the working environment. The results......This paper discusses how the production principles of Lean Manufacturing (Lean) can be applied in a large-scale meal production. Lean principles are briefly presented, followed by a field study of how a kitchen at a Danish hospital has implemented Lean in the daily production. In the kitchen...

  17. A large-scale crop protection bioassay data set

    Science.gov (United States)

    Gaulton, Anna; Kale, Namrata; van Westen, Gerard J. P.; Bellis, Louisa J.; Bento, A. Patrícia; Davies, Mark; Hersey, Anne; Papadatos, George; Forster, Mark; Wege, Philip; Overington, John P.

    2015-07-01

    ChEMBL is a large-scale drug discovery database containing bioactivity information primarily extracted from scientific literature. Due to the medicinal chemistry focus of the journals from which data are extracted, the data are currently of most direct value in the field of human health research. However, many of the scientific use-cases for the current data set are equally applicable in other fields, such as crop protection research: for example, identification of chemical scaffolds active against a particular target or endpoint, the de-convolution of the potential targets of a phenotypic assay, or the potential targets/pathways for safety liabilities. In order to broaden the applicability of the ChEMBL database and allow more widespread use in crop protection research, an extensive data set of bioactivity data of insecticidal, fungicidal and herbicidal compounds and assays was collated and added to the database.

  18. Transition from large-scale to small-scale dynamo.

    Science.gov (United States)

    Ponty, Y; Plunian, F

    2011-04-15

    The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher order effects. Beyond the onset we find that this generalized α effect scales as O(Rm(-1)), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo occurs even if the large-scale field is artificially suppressed.

  19. Automatic Installation and Configuration for Large Scale Farms

    CERN Document Server

    Novák, J

    2005-01-01

    Since the early appearance of commodity hardware, the utilization of computers rose rapidly, and they became essential in all areas of life. Soon it was realized that nodes are able to work cooperatively, in order to solve new, more complex tasks. This concept materialized in coherent aggregations of computers called farms and clusters. Collective application of nodes, being efficient and economical, was adopted in education, research and industry before long. But maintenance, especially at large scale, appeared as a problem to be resolved. New challenges needed new methods and tools. Development work has been started to build farm management applications and frameworks. In the first part of the thesis, these systems are introduced. After a general description of the matter, a comparative analysis of different approaches and tools illustrates the practical aspects of the theoretical discussion. CERN, the European Organization for Nuclear Research, is the largest particle physics laboratory in the world....

  20. The large-scale properties of simulated cosmic magnetic fields

    CERN Document Server

    Marinacci, Federico; Mocz, Philip; Pakmor, Ruediger

    2015-01-01

    We perform uniformly sampled large-scale cosmological simulations including magnetic fields with the moving mesh code AREPO. We run two sets of MHD simulations: one including adiabatic gas physics only; the other featuring the fiducial feedback model of the Illustris simulation. In the adiabatic case, the magnetic field amplification follows the $B \\propto \\rho^{2/3}$ scaling derived from `flux-freezing' arguments, with the seed field strength providing an overall normalisation factor. At high baryon overdensities the amplification is enhanced by shear flows and turbulence. Feedback physics and the inclusion of radiative cooling change this picture dramatically. Gas collapses to much larger densities and the magnetic field is amplified strongly, reaching saturation and losing memory of the initial seed field. At lower densities a dependence on the seed field strength and orientation, which in principle can be used to constrain models of cosmological magnetogenesis, is still present. Inside the most massive ha...

  1. Near optimal bispectrum estimators for large-scale structure

    CERN Document Server

    Schmittfull, Marcel; Seljak, Uroš

    2014-01-01

    Clustering of large-scale structure provides significant cosmological information through the power spectrum of density perturbations. Additional information can be gained from higher-order statistics like the bispectrum, especially to break the degeneracy between the linear halo bias $b_1$ and the amplitude of fluctuations $\\sigma_8$. We propose new simple, computationally inexpensive bispectrum statistics that are near optimal for the specific applications like bias determination. Corresponding to the Legendre decomposition of nonlinear halo bias and gravitational coupling at second order, these statistics are given by the cross-spectra of the density with three quadratic fields: the squared density, a tidal term, and a shift term. For halos and galaxies the first two have associated nonlinear bias terms $b_2$ and $b_{s^2}$, respectively, while the shift term has none in the absence of velocity bias (valid in the $k \\rightarrow 0$ limit). Thus the linear bias $b_1$ is best determined by the shift cross-spec...

  2. A large-scale crop protection bioassay data set.

    Science.gov (United States)

    Gaulton, Anna; Kale, Namrata; van Westen, Gerard J P; Bellis, Louisa J; Bento, A Patrícia; Davies, Mark; Hersey, Anne; Papadatos, George; Forster, Mark; Wege, Philip; Overington, John P

    2015-01-01

    ChEMBL is a large-scale drug discovery database containing bioactivity information primarily extracted from scientific literature. Due to the medicinal chemistry focus of the journals from which data are extracted, the data are currently of most direct value in the field of human health research. However, many of the scientific use-cases for the current data set are equally applicable in other fields, such as crop protection research: for example, identification of chemical scaffolds active against a particular target or endpoint, the de-convolution of the potential targets of a phenotypic assay, or the potential targets/pathways for safety liabilities. In order to broaden the applicability of the ChEMBL database and allow more widespread use in crop protection research, an extensive data set of bioactivity data of insecticidal, fungicidal and herbicidal compounds and assays was collated and added to the database.

  3. Recovery Act - Large Scale SWNT Purification and Solubilization

    Energy Technology Data Exchange (ETDEWEB)

    Michael Gemano; Dr. Linda B. McGown

    2010-10-07

    The goal of this Phase I project was to establish a quantitative foundation for development of binary G-gels for large-scale, commercial processing of SWNTs and to develop scientific insight into the underlying mechanisms of solubilization, selectivity and alignment. In order to accomplish this, we performed systematic studies to determine the effects of G-gel composition and experimental conditions that will enable us to achieve our goals that include (1) preparation of ultra-high purity SWNTs from low-quality, commercial SWNT starting materials, (2) separation of MWNTs from SWNTs, (3) bulk, non-destructive solubilization of individual SWNTs in aqueous solution at high concentrations (10-100 mg/mL) without sonication or centrifugation, (4) tunable enrichment of subpopulations of the SWNTs based on metallic vs. semiconductor properties, diameter, or chirality and (5) alignment of individual SWNTs.

  4. Large-scale BAO signatures of the smallest galaxies

    CERN Document Server

    Dalal, Neal; Seljak, Uros

    2010-01-01

    Recent work has shown that at high redshift, the relative velocity between dark matter and baryonic gas is typically supersonic. This relative velocity suppresses the formation of the earliest baryonic structures like minihalos, and the suppression is modulated on large scales. This effect imprints a characteristic shape in the clustering power spectrum of the earliest structures, with significant power on 100 Mpc scales featuring highly pronounced baryon acoustic oscillations. The amplitude of these oscillations is orders of magnitude larger at z=20 than previously expected. This characteristic signature can allow us to distinguish the effects of minihalos on intergalactic gas at times preceding and during reionization. We illustrate this effect with the example of 21 cm emission and absorption from redshifts during and before reionization. This effect can potentially allow us to probe physics on kpc scales using observations on 100 Mpc scales. We present sensitivity forecasts for FAST and Arecibo. Depending...

  5. Applications of large-scale density functional theory in biology

    Science.gov (United States)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships is approaching reality.

  6. Large-scale Cosmic Flows from Cosmicflows-2 Catalog

    CERN Document Server

    Watkins, Richard

    2014-01-01

    We calculate the large-scale bulk flow from the Cosmicflows-2 peculiar velocity catalog (Tully et al. 2013) using the minimum variance method introduced in Watkins et al. (2009). We find a bulk flow of 262 +/- 60 km/sec on a scale of 100 Mpc/h, a result somewhat smaller than that found from the COMPOSITE catalog introduced in Watkins et al. (2009), a compendium of peculiar velocity data that has many objects in common with the Cosmicflows-2 sample. We find that distances are systematically larger in the Cosmicflows-2 catalog for objects in common due to a different approach to bias correction, and that this explains the difference in the bulk flows derived from the two catalogs. The bulk flow result from the Cosmicflows-2 survey is consistent with expectations from LCDM, and thus this catalog potentially resolves an important challenge to the standard cosmological model.

  7. Theoretical expectations for bulk flows in large scale surveys

    CERN Document Server

    Feldman, Hume A.; Watkins, Richard

    1993-01-01

    We calculate the theoretical expectation for the bulk motion of a large scale survey of the type recently carried out by Lauer and Postman. Included are the effects of survey geometry, errors in the distance measurements, clustering properties of the sample, and different assumed power spectra. We consider the power spectrum calculated from the IRAS-QDOT survey, as well as spectra from hot + cold and standard cold dark matter models. We find that sparse sampling and clustering can lead to an unexpectedly large bulk flow, even in a very deep survey. Our results suggest that the expected bulk motion is inconsistent with that reported by Lauer and Postman at the 90-94% confidence level.

  8. Innovation cycle for small- and large-scale change.

    Science.gov (United States)

    Scott, Kathy; Steinbinder, Amy

    2009-01-01

    In today's complex healthcare systems, transformation requires two major efforts: (1) a fundamental change in the underlying beliefs and assumptions that perpetuate the current system and (2) a fundamental redesign of the multiplicity of diverse and complex subsystems that result in unpredictable aggregate behavior and outcomes. Through an Intelligent Complex Adaptive System framework combined with an innovation process, a transformation process and cycle was created for a large healthcare system that resulted in both small- and large-scale changes. This process not only challenges the underlying beliefs and assumptions but also creates new possibilities and prototypes for care delivery through a change-management process that is inclusive and honors the contributions of the entire team.

  9. Battery technologies for large-scale stationary energy storage.

    Science.gov (United States)

    Soloveichik, Grigorii L

    2011-01-01

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  10. Observational signatures of modified gravity on ultra-large scales

    CERN Document Server

    Baker, Tessa

    2015-01-01

    Extremely large surveys with future experiments like Euclid and the SKA will soon allow us to access perturbation modes close to the Hubble scale, with wavenumbers $k \\sim {\\cal H}$. If a modified gravity theory is responsible for cosmic acceleration, the Hubble scale is a natural regime for deviations from General Relativity (GR) to become manifest. The majority of studies to date have concentrated on the consequences of alternative gravity theories for the subhorizon, quasi-static regime, however. We investigate how modifications to the gravitational field equations affect perturbations around the Hubble scale, and how this translates into deviations of ultra large-scale relativistic observables from their GR behaviour. Adopting a model-independent ethos that relies only on the broad physical properties of gravity theories, we find that the deviations of the observables are small unless modifications to GR are drastic. The angular dependence and redshift evolution of the deviations is highly parameterisatio...

  11. Large scale instabilities in two-dimensional magnetohydrodynamics

    Science.gov (United States)

    Boffetta; Celani; Prandi

    2000-04-01

    The stability of a sheared magnetic field is analyzed in two-dimensional magnetohydrodynamics with resistive and viscous dissipation. Using a multiple-scale analysis, it is shown that at large enough Reynolds numbers the basic state, describing a motionless fluid and a layered magnetic field, becomes unstable with respect to large scale perturbations. The exact expressions for eddy-viscosity and eddy-resistivity are derived in the vicinity of the critical point where the instability sets in. In this marginally unstable case the nonlinear phase of perturbation growth obeys a Cahn-Hilliard-like dynamics characterized by coalescence of magnetic islands, leading to a final new equilibrium state. High resolution numerical simulations confirm quantitatively the predictions of the multiscale analysis.

  12. Large-scale structure non-Gaussianities with modal methods

    Science.gov (United States)

    Schmittfull, Marcel

    2016-10-01

    Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).

  13. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice......Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...

  14. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...... approach applied throughout design and organizational implementation. To pursue this aim we extend the iterative PD prototyping approach by (1) emphasizing PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporating...... improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extending initial design and development into a sustained and ongoing stepwise implementation that constitutes an overall technology-driven organizational change. The extended approach is exemplified through...

  15. Towards online multiresolution community detection in large-scale networks.

    Directory of Open Access Journals (Sweden)

    Jianbin Huang

    Full Text Available The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods tend to be accurate depending on a priori assumptions of network properties and predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution community from a source vertex or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks.
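
    As a hedged illustration of local community detection by greedy expansion (using a simple internal-edge-fraction score rather than the quality function proposed in the record above), the sketch below grows a community from a seed vertex using only local information; the toy graph is illustrative.

        # Hedged sketch: greedy local community expansion from a seed vertex, scored
        # by the fraction of community edge endpoints that stay inside the community.
        # Illustrative only; not the quality function of the record above.

        def local_community(graph, seed, max_size=50):
            """Grow a community from `seed` while the internal edge fraction improves."""
            community = {seed}

            def score(comm):
                internal = boundary = 0
                for u in comm:
                    for v in graph[u]:
                        if v in comm:
                            internal += 1          # counted once from each endpoint
                        else:
                            boundary += 1
                total = internal + boundary
                return internal / total if total else 0.0

            current = score(community)
            while len(community) < max_size:
                frontier = {v for u in community for v in graph[u]} - community
                if not frontier:
                    break
                best, best_score = None, current
                for v in frontier:                 # try adding each neighbouring vertex
                    s = score(community | {v})
                    if s > best_score:
                        best, best_score = v, s
                if best is None:                   # no neighbour improves the score
                    break
                community.add(best)
                current = best_score
            return community

        if __name__ == "__main__":
            # Two triangles joined by a single bridge edge (2-3).
            g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
            print(sorted(local_community(g, seed=0)))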

  16. Theoretical expectations for bulk flows in large-scale surveys

    Science.gov (United States)

    Feldman, Hume A.; Watkins, Richard

    1994-01-01

    We calculate the theoretical expectation for the bulk motion of a large-scale survey of the type recently carried out by Lauer and Postman. Included are the effects of survey geometry, errors in the distance measurements, clustering properties of the sample, and different assumed power spectra. We considered the power spectrum calculated from the Infrared Astronomy Satellite (IRAS)-QDOT survey, as well as spectra from hot + cold and standard cold dark matter models. We find that measurement uncertainty, sparse sampling, and clustering can lead to a much larger expectation for the bulk motion of a cluster sample than for the volume as a whole. However, our results suggest that the expected bulk motion is still inconsistent with that reported by Lauer and Postman at the 95%-97% confidence level.

  17. The XMM/Megacam-VST/VIRMOS Large Scale Structure Survey

    CERN Document Server

    Pierre, M

    2000-01-01

    The objective of the XMM-LSS Survey is to map the large scale structure of the universe, as highlighted by clusters and groups of galaxies, out to a redshift of about 1, over a single 8x8 sq.deg. area. For the first time, this will reveal the topology of the distribution of the deep potential wells and provide statistical measurements at truly cosmological distances. In addition, clusters identified via their X-ray properties will form the basis for the first uniformly-selected, multi-wavelength survey of the evolution of clusters and individual cluster galaxies as a function of redshift. The survey will also address the very important question of the QSO distribution within the cosmic web.

  18. Chirping for large-scale maritime archaeological survey

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and wrecks, partially or wholly embedded in sea-floor sediments, requires the application of high...... those employed in several detailed studies of known wreck sites and from the way in which geologists map the sea floor and the geological column beneath it. The strategy has been developed on the basis of extensive practical experience gained during the use of an off-the-shelf 2D chirp system and, given......-resolution subbottom profilers. This paper presents a strategy for cost-effective, large-scale mapping of previously undetected sediment-embedded sites and wrecks based on subbottom profiling with chirp systems. The mapping strategy described includes (a) definition of line spacing depending on the target; (b...

  19. How Large-Scale Research Facilities Connect to Global Research

    DEFF Research Database (Denmark)

    Lauto, Giancarlo; Valentin, Finn

    2013-01-01

    research. However, based on data on publications produced in 2006–2009 at the Neutron Science Directorate of Oak Ridge National Laboratory in Tennessee (United States), we find that internationalization of its collaborative research is restrained by coordination costs similar to those characterizing other......Policies for large-scale research facilities (LSRFs) often highlight their spillovers to industrial innovation and their contribution to the external connectivity of the regional innovation system hosting them. Arguably, the particular institutional features of LSRFs are conducive for collaborative...... institutional settings. Policies mandating LSRFs should consider that research prioritized on the basis of technological relevance limits the international reach of collaborations. Additionally, the propensity for international collaboration is lower for resident scientists than for those affiliated...

  20. Possible implications of large scale radiation processing of food

    Science.gov (United States)

    Zagórski, Z. P.

    Large scale irradiation has been discussed in terms of the share of processing cost in the final value of the improved product. Another factor taken into account is the saturation of the market with the new product. In successful projects the share of irradiation cost is low and the demand for the better product is covered. The limited availability of radiation sources makes even modest saturation of the market with correctly processed food difficult. Implementing food preservation by irradiation requires a deliberate selection of those kinds of food which comply with all conditions, i.e. acceptance by regulatory bodies, real improvement of quality, and economy. The last condition favours the use of low-energy electron beams. The conditions for successful processing are best fulfilled by dry foods, expensive spices in particular.

  1. Large scale anisotropy studies with the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Bonino, R., E-mail: rbonino@to.infn.it [Istituto Nazionale di Astrofisica - IFSI, c.so Fiume 4, 10133 Torino (Italy); INFN sezione di Torino, v. P. Giuria 1, 10125 Torino (Italy)

    2012-11-11

    Completed at the end of 2008, the Pierre Auger Observatory has been continuously operating for more than seven years. We present here the analysis techniques and the results of the search for large scale anisotropies in the sky distribution of cosmic rays, reporting both the phase and the amplitude measurements of the first harmonic modulation in right ascension in different energy ranges above 2.5 × 10^17 eV. Thanks to the collected statistics, a sensitivity of 1% at EeV energies can be reached. No significant anisotropies have been observed; upper limits on the amplitudes have been derived and are here compared with the results of previous experiments and with some theoretical expectations.

  2. Large scale anisotropy studies with the Pierre Auger Observatory

    Science.gov (United States)

    Bonino, R.

    2012-11-01

    Completed at the end of 2008, the Pierre Auger Observatory has been continuously operating for more than seven years. We present here the analysis techniques and the results of the search for large scale anisotropies in the sky distribution of cosmic rays, reporting both the phase and the amplitude measurements of the first harmonic modulation in right ascension in different energy ranges above 2.5 × 10^17 eV. Thanks to the collected statistics, a sensitivity of 1% at EeV energies can be reached. No significant anisotropies have been observed; upper limits on the amplitudes have been derived and are here compared with the results of previous experiments and with some theoretical expectations.

  3. SOLVING TRUST REGION PROBLEM IN LARGE SCALE OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    Bing-sheng He

    2000-01-01

    This paper presents a new method for solving the basic problem in the "model trust region" approach to large scale minimization: compute a vector x such that (1/2)x^T H x + c^T x is minimized subject to the constraint ||x||_2 ≤ a. The method is a combination of the conjugate gradient (CG) method and a projection and contraction (PC) method. The first (CG) method, started from x0 = 0, either directly offers a solution of the problem or, as soon as the norm of the iterate exceeds a, gives a suitable starting point and a favourable choice of a crucial scaling parameter for the second (PC) method. Some numerical examples are given, which indicate that the method is applicable.
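
    A related and widely used way of handling the trust-region constraint within CG is to stop at the boundary. The sketch below is a hedged, Steihaug-style truncated CG; it is not He's CG plus projection-and-contraction combination, which instead hands the boundary point to a second method. The test matrix and radius are arbitrary.

        # Hedged sketch: truncated CG for  min 0.5*x'Hx + c'x  subject to  ||x||_2 <= a.
        # Steihaug-style boundary handling; not the PC method of the record above.
        import numpy as np

        def _to_boundary(x, p, a):
            # Choose tau >= 0 such that ||x + tau*p||_2 = a.
            px, pp, xx = p @ x, p @ p, x @ x
            tau = (-px + np.sqrt(px**2 + pp * (a**2 - xx))) / pp
            return x + tau * p

        def truncated_cg(H, c, a, tol=1e-8, max_iter=100):
            x = np.zeros(len(c))                 # start from the origin, as in the record
            r = -(H @ x) - c                     # residual of the system H x = -c
            p = r.copy()
            for _ in range(max_iter):
                if np.linalg.norm(r) < tol:
                    return x                     # interior minimizer found
                Hp = H @ p
                pHp = p @ Hp
                if pHp <= 0:                     # negative curvature: go to the boundary
                    return _to_boundary(x, p, a)
                alpha = (r @ r) / pHp
                if np.linalg.norm(x + alpha * p) >= a:
                    return _to_boundary(x, p, a) # iterate would leave the trust region
                x = x + alpha * p
                r_new = r - alpha * Hp
                beta = (r_new @ r_new) / (r @ r)
                r = r_new
                p = r + beta * p
            return x

        if __name__ == "__main__":
            H = np.array([[4.0, 1.0], [1.0, 3.0]])
            c = np.array([-1.0, -2.0])
            print(truncated_cg(H, c, a=0.4))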

  4. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Full Text Available Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.

  5. Mass Efficiencies for Common Large-Scale Precision Space Structures

    Science.gov (United States)

    Williams, R. Brett; Agnes, Gregory S.

    2005-01-01

    This paper presents a mass-based trade study for large-scale deployable triangular trusses, where the longerons can be monocoque tubes, isogrid tubes, or coilable longeron trusses. Such structures are typically used to support heavy reflectors, solar panels, or other instruments, and are subject to thermal gradients that can vary a great deal based on orbital altitude, location in orbit, and self-shadowing. While multi layer insulation (MLI) blankets are commonly used to minimize the magnitude of these thermal disturbances, they subject the truss to a nonstructural mass penalty. This paper investigates the impact of these add-on thermal protection layers on selecting the lightest precision structure for a given loading scenario.

  6. Large scale simulations of the great 1906 San Francisco earthquake

    Science.gov (United States)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large-scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material database for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.
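
    The quoted problem size can be checked with a few lines of arithmetic. The sketch below assumes a uniform 125 m spacing in all three directions, which is an assumption of this note rather than a statement about the production grid (any coarsening with depth or trimming of the domain would bring the count down towards the 2.2 billion points quoted above).

        # Back-of-the-envelope grid-point count for a 550 km x 250 km x 40 km domain
        # at an assumed uniform 125 m spacing.
        nx = 550_000 // 125
        ny = 250_000 // 125
        nz = 40_000 // 125
        points = nx * ny * nz
        print(f"{nx} x {ny} x {nz} = {points / 1e9:.2f} billion grid points")
        # A single double-precision field on this grid already needs ~8 bytes/point.
        print(f"~{8 * points / 1e9:.0f} GB per 3-D double-precision field")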

  7. Magnetic fields and the large-scale structure

    CERN Document Server

    Battaner, E

    1999-01-01

    The large-scale structure of the Universe has been observed to be characterized by long filaments, forming polyhedra, with a remarkable 100-200 Mpc periodicity, suggesting a regular network. The introduction of magnetic fields into the physics of the evolution of structure formation provides some clues to understanding this unexpected lattice structure. A relativistic treatment of the evolution of pre-recombination inhomogeneities, including magnetic fields, is presented to show that equivalent-to-present field strengths of the order of $10^{-8}$ G could have played an important role. Primordial magnetic tubes generated at inflation, at scales larger than the horizon before recombination, could have produced filamentary density structures, with comoving lengths larger than about 10 Mpc. Structures shorter than this would have been destroyed by diffusion due to the small pre-recombination conductivity. If filaments constitute a lattice, the primordial magnetic field structures that produced the post-recombinat...

  8. Supersymmetry and Large Scale Left-Right Symmetry

    CERN Document Server

    Aulakh, Charanjit S; Rasin, A; Senjanovic, G; Aulakh, Charanjit S.; Melfo, Alejandra; Rasin, Andrija; Senjanovic, Goran

    1998-01-01

    We present a systematic study of the construction of large scale supersymmetric left-right theories, by utilizing holomorphic invariants to characterize flat directions, both at the renormalizable and the non-renormalizable level. We show that the low energy limit of the minimal supersymmetric Left-Right models is the supersymmetric standard model with an exact R-parity. Whereas in the renormalizable version the scale of parity breaking is undetermined, in the non-renormalizable one it must be bigger than about $10^{10} - 10^{12}$ GeV. The precise nature of the see-saw mechanism differs in the two versions, and we discuss it at length. In both versions of the theory a number of Higgs scalars and fermions with masses much below the $B-L$ and $SU(2)_R$ breaking scales are predicted. For a reasonable choice of parameters, either charged or doubly-charged such particles may be accessible to experiment.

  9. Large scale protein separations: engineering aspects of chromatography.

    Science.gov (United States)

    Chisti, Y; Moo-Young, M

    1990-01-01

    The engineering considerations common to large scale chromatographic purification of proteins are reviewed. A discussion of the industrial chromatography fundamentals is followed by aspects which affect the scale of separation. The separation column geometry, the effect of the main operational parameters on separation performance, and the physical characteristics of column packing are treated. Throughout, the emphasis is on ion exchange and size exclusion techniques which together constitute the major portion of commercial chromatographic protein purifications. In all cases, the state of current technology is examined and areas in need of further development are noted. The physico-chemical advances now underway in chromatographic separation of biopolymers would ensure a substantially enhanced role for these techniques in industrial production of products of new biotechnology.

  10. U-shaped Vortex Structures in Large Scale Cloud Cavitation

    Science.gov (United States)

    Cao, Yantao; Peng, Xiaoxing; Xu, Lianghao; Hong, Fangwen

    2015-12-01

    The control of cloud cavitation, especially large-scale cloud cavitation (LSCC), is always a hot issue in the field of cavitation research. However, there has been little knowledge of the evolution of cloud cavitation, since it is associated with turbulence and vortex flow. In this article, the structure of cloud cavitation shed by sheet cavitation around different hydrofoils and a wedge was observed in detail with a high-speed camera (HSC). It was found that U-shaped vortex structures always existed in the development process of LSCC. The results indicated that LSCC evolution was related to this kind of vortex structure, which may be a universal characteristic of LSCC. The vortex strength of the U-shaped vortex structures over a cycle was then analyzed using numerical results.

  11. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring (PHM) for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
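
    The kind of detection the framework performs can be illustrated with a deliberately simple stand-in: a rolling z-score test on a single performance counter. This is not the project's diagnostics; the window length and threshold below are arbitrary placeholders.

        import numpy as np

        def flag_performance_faults(samples, window=60, threshold=4.0):
            """Flag samples deviating from a rolling baseline by > threshold sigmas.

            samples : 1-D array of a performance counter (e.g. per-step wall time).
            Returns the indices of suspect samples.
            """
            samples = np.asarray(samples, dtype=float)
            suspects = []
            for i in range(window, len(samples)):
                baseline = samples[i - window:i]
                mu, sigma = baseline.mean(), baseline.std() + 1e-12
                if abs(samples[i] - mu) > threshold * sigma:
                    suspects.append(i)
            return suspects

        # Example: step times that degrade near the end, as under resource contention.
        rng = np.random.default_rng(1)
        times = np.concatenate([rng.normal(1.0, 0.02, 500), rng.normal(1.4, 0.02, 20)])
        print(flag_performance_faults(times)[:5])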

  12. Statistics of Caustics in Large-Scale Structure Formation

    Science.gov (United States)

    Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien

    2016-10-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
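
    In the 1D Zel'dovich mapping x(q, t) = q + D(t) s(q), a caustic forms wherever the Jacobian J(q) = 1 + D ds/dq vanishes. The sketch below simply counts such zero crossings for a Gaussian random displacement field; the power spectrum and the normalization of ds/dq are placeholders, not the paper's choices.

        import numpy as np

        def count_caustics(growth, n=4096, boxsize=1000.0, seed=2):
            """Count caustics of a 1-D Zel'dovich mapping x(q) = q + D * s(q).

            A caustic occurs where J(q) = 1 + D * ds/dq crosses zero, i.e. where
            the density formally diverges.  s(q) is a Gaussian random field with
            a placeholder power spectrum; ds/dq is rescaled to unit variance so
            that `growth` directly controls how nonlinear the mapping is.
            """
            rng = np.random.default_rng(seed)
            k = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
            k[0] = 1.0                                 # avoid division by zero
            power = k**-1.0 * np.exp(-(5.0 * k) ** 2)  # placeholder P(k)
            s_k = np.sqrt(power / 2.0) * (rng.normal(size=k.size)
                                          + 1j * rng.normal(size=k.size))
            s_k[0] = 0.0
            s = np.fft.irfft(s_k, n=n)
            dsdq = np.gradient(s, boxsize / n)
            dsdq /= dsdq.std()                         # arbitrary normalization
            jac = 1.0 + growth * dsdq
            return int(np.sum(np.sign(jac[:-1]) != np.sign(jac[1:])))

        # Few or no caustics while the mapping is quasi-linear, more as D grows.
        print([count_caustics(d) for d in (0.5, 1.0, 2.0)])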

  13. A mini review: photobioreactors for large scale algal cultivation.

    Science.gov (United States)

    Gupta, Prabuddha L; Lee, Seung-Mok; Choi, Hee-Jeong

    2015-09-01

    Microalgae cultivation has gained much interest in terms of the production of foods, biofuels, and bioactive compounds and offers a great potential option for cleaning the environment through CO2 sequestration and wastewater treatment. Although open pond cultivation is the most affordable option, there tends to be insufficient control over growth conditions and a risk of contamination. In contrast, while providing minimal risk of contamination, closed photobioreactors offer better control over culture conditions, such as CO2 supply, water supply, optimal temperatures, efficient exposure to light, culture density, pH levels, and mixing rates. For large-scale production of biomass, efficient photobioreactors are required. This review paper describes general design considerations pertaining to photobioreactor systems, in order to cultivate microalgae for biomass production. It also discusses the current challenges in the design of photobioreactors for the production of low-cost biomass.

  14. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of such challenges, which affect electric grid reliability and economic operations. These challenges are: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Units (PMUs) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate expansion of transmission line capacity on methods to ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, the congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "Where" to add, "How much transmission line capacity" to add, and "Which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue by building new transmission lines. Adding new transmission capacity will help the system to relieve the transmission system congestion, create

  15. Large Scale Bacterial Colony Screening of Diversified FRET Biosensors.

    Directory of Open Access Journals (Sweden)

    Julia Litzlbauer

    Full Text Available Biosensors based on Förster Resonance Energy Transfer (FRET between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors.

  16. Spatial solitons in photonic lattices with large-scale defects

    Institute of Scientific and Technical Information of China (English)

    Yang Xiao-Yu; Zheng Jiang-Bo; Dong Liang-Wei

    2011-01-01

    We address the existence, stability and propagation dynamics of solitons supported by large-scale defects surrounded by the harmonic photonic lattices imprinted in the defocusing saturable nonlinear medium. Several families of soliton solutions, including flat-topped, dipole-like, and multipole-like solitons, can be supported by the defected lattices with different heights of defects. The width of the existence domain of solitons is determined solely by the saturable parameter. The existence domains of various types of solitons can be shifted by the variations of defect size, lattice depth and soliton order. Solitons in the model are stable in a wide parameter window, provided that the propagation constant exceeds a critical value, which is in sharp contrast to the case where soliton trains are supported by periodic lattices imprinted in a defocusing saturable nonlinear medium. We also find stable solitons in the semi-infinite gap, which rarely occur in defocusing media.

  17. Order reduction of large-scale linear oscillatory system models

    Energy Technology Data Exchange (ETDEWEB)

    Trudnowksi, D.J. (Pacific Northwest Lab., Richland, WA (United States))

    1994-02-01

    Eigen analysis and signal analysis techniques of deriving representations of power system oscillatory dynamics result in very high-order linear models. In order to apply many modern control design methods, the models must be reduced to a more manageable order while preserving essential characteristics. Presented in this paper is a model reduction method well suited for large-scale power systems. The method searches for the optimal subset of the high-order model that best represents the system. An Akaike information criterion is used to define the optimal reduced model. The method is first presented, and then examples of applying it to Prony analysis and eigenanalysis models of power systems are given.
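
    The selection step described above, picking the reduced model that minimizes an Akaike information criterion, can be sketched as follows. The search here is a simple nested sweep over candidate modes fitted by linear least squares, which only illustrates the criterion and is not the paper's Prony/eigenanalysis machinery; the test signal and candidate mode list are made up.

        import numpy as np

        def aic_model_order(signal, t, candidate_modes):
            """Pick how many candidate oscillatory modes to retain using AIC.

            candidate_modes : list of (sigma, omega) pairs, ordered by assumed
            importance; each contributes exp(sigma*t)*cos(omega*t) and
            exp(sigma*t)*sin(omega*t) basis functions fitted by least squares.
            Returns the mode count minimizing AIC = N*ln(RSS/N) + 2*k.
            """
            n = len(signal)
            best_order, best_aic = 0, np.inf
            for m in range(1, len(candidate_modes) + 1):
                cols = []
                for sigma, omega in candidate_modes[:m]:
                    cols.append(np.exp(sigma * t) * np.cos(omega * t))
                    cols.append(np.exp(sigma * t) * np.sin(omega * t))
                basis = np.column_stack(cols)
                coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
                rss = np.sum((signal - basis @ coef) ** 2)
                aic = n * np.log(rss / n) + 2 * (2 * m)
                if aic < best_aic:
                    best_order, best_aic = m, aic
            return best_order

        # A two-mode ringdown plus noise; AIC should prefer retaining two modes.
        t = np.linspace(0, 10, 500)
        rng = np.random.default_rng(3)
        y = (np.exp(-0.1 * t) * np.cos(2.0 * t)
             + 0.5 * np.exp(-0.05 * t) * np.sin(5.0 * t)
             + rng.normal(0, 0.05, t.size))
        modes = [(-0.1, 2.0), (-0.05, 5.0), (-0.2, 8.0), (-0.3, 11.0)]
        print(aic_model_order(y, t, modes))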

  18. The dynamics of large-scale arrays of coupled resonators

    Science.gov (United States)

    Borra, Chaitanya; Pyles, Conor S.; Wetherton, Blake A.; Quinn, D. Dane; Rhoads, Jeffrey F.

    2017-03-01

    This work describes an analytical framework suitable for the analysis of large-scale arrays of coupled resonators, including those which feature amplitude and phase dynamics, inherent element-level parameter variation, nonlinearity, and/or noise. In particular, this analysis allows for the consideration of coupled systems in which the number of individual resonators is large, extending as far as the continuum limit corresponding to an infinite number of resonators. Moreover, this framework permits analytical predictions for the amplitude and phase dynamics of such systems. The utility of this analytical methodology is explored through the analysis of a system of N non-identical resonators with global coupling, including both reactive and dissipative components, physically motivated by an electromagnetically-transduced microresonator array. In addition to the amplitude and phase dynamics, the behavior of the system as the number of resonators varies is investigated and the convergence of the discrete system to the infinite-N limit is characterized.

  19. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. (Stanford Univ., CA (United States). Dept. of Operations Research Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
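
    The flavour of such two-stage problems can be fixed with a deliberately small example: the sample-average (deterministic-equivalent) form of a newsvendor-type recourse LP, solved directly rather than by the decomposition and importance-sampling machinery the abstract describes. The cost, price and demand distribution below are made-up numbers.

        import numpy as np
        from scipy.optimize import linprog

        def newsvendor_saa(demands, probs, cost=1.0, price=1.5):
            """Deterministic equivalent of a tiny two-stage stochastic LP.

            First stage: order x >= 0 at unit cost.  Recourse: in scenario s
            (probability p_s) sell y_s <= min(x, d_s) at unit price.
            Minimize  cost*x - sum_s p_s * price * y_s.
            """
            d = np.asarray(demands, dtype=float)
            p = np.asarray(probs, dtype=float)
            s = len(d)
            c = np.concatenate([[cost], -price * p])    # variables z = [x, y_1..y_S]
            a_ub = np.zeros((2 * s, s + 1))
            b_ub = np.zeros(2 * s)
            for i in range(s):
                a_ub[i, 0], a_ub[i, i + 1] = -1.0, 1.0  # y_s <= x
                a_ub[s + i, i + 1] = 1.0                # y_s <= d_s
                b_ub[s + i] = d[i]
            res = linprog(c, A_ub=a_ub, b_ub=b_ub, bounds=[(0, None)] * (s + 1))
            return res.x[0], res.fun

        # Monte Carlo demand scenarios; more samples approximate the true problem better.
        rng = np.random.default_rng(4)
        d = rng.lognormal(mean=3.0, sigma=0.4, size=200)
        x_opt, obj = newsvendor_saa(d, np.full(d.size, 1.0 / d.size))
        print(f"order quantity {x_opt:.1f}, expected cost {obj:.2f}")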

  20. Synchronization control for large-scale network systems

    CERN Document Server

    Wu, Yuanqing; Su, Hongye; Shi, Peng; Wu, Zheng-Guang

    2017-01-01

    This book provides recent advances in the analysis and synthesis of large-scale network systems (LSNSs) with sampled-data communication and non-identical nodes. The first chapter presents an introduction to synchronization of LSNSs and algebraic graph theory, as well as an overview of recent developments in LSNSs with sampled-data control or output regulation control. The main text of the book is organized into two parts: Part I, LSNSs with sampled-data communication, and Part II, LSNSs with non-identical nodes. This monograph provides up-to-date advances and some recent developments in the analysis and synthesis issues for LSNSs with sampled-data communication and non-identical nodes. It describes the constructions of the adaptive reference generators in the first stage and the robust regulators in the second stage. Examples are presented to show the effectiveness of the proposed design techniques.

  1. Planck intermediate results. XLII. Large-scale Galactic magnetic fields

    CERN Document Server

    Adam, R; Alves, M I R; Ashdown, M; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Bartolo, N; Battaner, E; Benabed, K; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bielewicz, P; Bonavera, L; Bond, J R; Borrill, J; Bouchet, F R; Boulanger, F; Bucher, M; Burigana, C; Butler, R C; Calabrese, E; Cardoso, J -F; Catalano, A; Chiang, H C; Christensen, P R; Colombo, L P L; Combet, C; Couchot, F; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davis, R J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Dickinson, C; Diego, J M; Dolag, K; Doré, O; Ducout, A; Dupac, X; Elsner, F; Enßlin, T A; Eriksen, H K; Ferrière, K; Finelli, F; Forni, O; Frailis, M; Fraisse, A A; Franceschi, E; Galeotta, S; Ganga, K; Ghosh, T; Giard, M; Gjerløw, E; González-Nuevo, J; Górski, K M; Gregorio, A; Gruppuso, A; Gudmundsson, J E; Hansen, F K; Harrison, D L; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hobson, M; Hornstrup, A; Hurier, G; Jaffe, A H; Jaffe, T R; Jones, W C; Juvela, M; Keihänen, E; Keskitalo, R; Kisner, T S; Knoche, J; Kunz, M; Kurki-Suonio, H; Lamarre, J -M; Lasenby, A; Lattanzi, M; Lawrence, C R; Leahy, J P; Leonardi, R; Levrier, F; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; Maggio, G; Maino, D; Mandolesi, N; Mangilli, A; Maris, M; Martin, P G; Masi, S; Melchiorri, A; Mennella, A; Migliaccio, M; Miville-Deschênes, M -A; Moneti, A; Montier, L; Morgante, G; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Nørgaard-Nielsen, H U; Oppermann, N; Orlando, E; Pagano, L; Pajot, F; Paladini, R; Paoletti, D; Pasian, F; Perotto, L; Pettorino, V; Piacentini, F; Piat, M; Pierpaoli, E; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Pratt, G W; Prunet, S; Puget, J -L; Rachen, J P; Reinecke, M; Remazeilles, M; Renault, C; Renzi, A; Ristorcelli, I; Rocha, G; Rossetti, M; Roudier, G; Rubiño-Martín, J A; Rusholme, B; Sandri, M; Santos, D; Savelainen, M; Scott, D; Spencer, L D; Stolyarov, V; Stompor, R; Strong, A W; Sudiwala, R; Sunyaev, R; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Terenzi, L; Toffolatti, L; Tomasi, M; Tristram, M; Tucci, M; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Wade, L A; Wandelt, B D; Wehus, I K; Yvon, D; Zacchei, A; Zonca, A

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature were largely constrained by synchrotron emission and Faraday rotation measures. We select three different but representative models and compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties. We then compare the resulting simulated emission to the observed dust emission and find that the dust predictions do not match the morphology in the Planck data, particularly the vertical profile in latitude. We show how the dust data can then be used to further improve these magnetic field models, particu...

  2. Large-Scale Quantitative Analysis of Painting Arts

    Science.gov (United States)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images: the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increase of the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances.
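
    Two of the three measures named above can be sketched with ordinary array operations: colour variety as the entropy of a quantized palette, and a crude roughness proxy from the scaling of brightness increments. The quantization depth, the set of scales, and the random test image are placeholders, not the paper's definitions.

        import numpy as np

        def color_variety(rgb_image, bits_per_channel=4):
            """Shannon entropy (bits) of the quantized colour distribution."""
            q = (rgb_image >> (8 - bits_per_channel)).astype(np.int64).reshape(-1, 3)
            codes = (q[:, 0] << (2 * bits_per_channel)) | (q[:, 1] << bits_per_channel) | q[:, 2]
            _, counts = np.unique(codes, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        def roughness_exponent(brightness, scales=(2, 4, 8, 16, 32)):
            """Slope of log(std of horizontal brightness increments) vs log(scale)."""
            widths = [(brightness[:, s:] - brightness[:, :-s]).std() for s in scales]
            slope, _ = np.polyfit(np.log(scales), np.log(widths), 1)
            return float(slope)

        # Toy example on a random "painting"; a real analysis would load image files.
        rng = np.random.default_rng(5)
        img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
        print(color_variety(img), roughness_exponent(img.mean(axis=2)))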

  3. An Extensible Timing Infrastructure for Adaptive Large-scale Applications

    CERN Document Server

    Stark, Dylan; Goodale, Tom; Radke, Thomas; Schnetter, Erik

    2007-01-01

    Real-time access to accurate and reliable timing information is necessary to profile scientific applications, and crucial as simulations become increasingly complex, adaptive, and large-scale. The Cactus Framework provides flexible and extensible capabilities for timing information through a well designed infrastructure and timing API. Applications built with Cactus automatically gain access to built-in timers, such as gettimeofday and getrusage, system-specific hardware clocks, and high-level interfaces such as PAPI. We describe the Cactus timer interface, its motivation, and its implementation. We then demonstrate how this timing information can be used by an example scientific application to profile itself, and to dynamically adapt itself to a changing environment at run time.
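
    The pairing of wall-clock and CPU-time readings that such timers expose can be imitated with the Python standard library alone. The sketch below is only a stand-in, not the Cactus timer API, and the resource module it uses is POSIX-only.

        import time
        import resource
        from contextlib import contextmanager

        @contextmanager
        def timer(label):
            """Report wall-clock and user CPU time for the enclosed block."""
            wall0 = time.perf_counter()
            cpu0 = resource.getrusage(resource.RUSAGE_SELF).ru_utime
            try:
                yield
            finally:
                wall = time.perf_counter() - wall0
                cpu = resource.getrusage(resource.RUSAGE_SELF).ru_utime - cpu0
                print(f"{label}: wall {wall:.4f} s, user CPU {cpu:.4f} s")

        with timer("evolution step"):
            sum(i * i for i in range(10**6))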

  4. Large-scale quantum networks based on graphs

    Science.gov (United States)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society relies and depends increasingly on information exchange and communication. In the quantum world, security and privacy is a built-in feature for information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.

  5. Striping and Scheduling for Large Scale Multimedia Servers

    Institute of Scientific and Technical Information of China (English)

    Kyung-Oh Lee; Jun-Ho Park; Yoon-Young Park

    2004-01-01

    When designing a multimedia server, several things must be decided: which scheduling scheme to adopt, how to allocate multimedia objects on storage devices, and the round length with which the streams will be serviced. Several problems in the design of large-scale multimedia servers are addressed, with the following contributions: (1) a striping scheme is proposed that minimizes the number of seeks and hence maximizes the performance; (2) a simple and efficient mechanism is presented to find the optimal striping unit size as well as the optimal round length, which exploits both the characteristics of VBR streams and the state of resources in the system; and (3) the characteristics and resource requirements of several scheduling schemes are investigated in order to obtain a clear indication as to which scheme shows the best performance in real-time multimedia servicing. Based on our analysis and experimental results, the CSCAN scheme outperforms the other schemes.
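
    The CSCAN (circular SCAN) service order referred to in the conclusion can be stated in a few lines: within a round, pending requests are served in increasing block order from the current head position, and the head then wraps around to the lowest pending block. The striping and round-length optimization of the paper are not reproduced, and the request list below is arbitrary.

        def cscan_order(pending_blocks, head_position):
            """Return the CSCAN service order for a set of requested block numbers."""
            ahead = sorted(b for b in pending_blocks if b >= head_position)
            behind = sorted(b for b in pending_blocks if b < head_position)
            return ahead + behind

        print(cscan_order([83, 12, 40, 95, 7, 60], head_position=50))
        # -> [60, 83, 95, 7, 12, 40]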

  6. Communities, modules and large-scale structure in networks

    Science.gov (United States)

    Newman, M. E. J.

    2012-01-01

    Networks, also called graphs by mathematicians, provide a useful abstraction of the structure of many complex systems, ranging from social systems and computer networks to biological networks and the state spaces of physical systems. In the past decade there have been significant advances in experiments to determine the topological structure of networked systems, but there remain substantial challenges in extracting scientific understanding from the large quantities of data produced by the experiments. A variety of basic measures and metrics are available that can tell us about small-scale structure in networks, such as correlations, connections and recurrent patterns, but it is considerably more difficult to quantify structure on medium and large scales, to understand the `big picture'. Important progress has been made, however, within the past few years, a selection of which is reviewed here.

  7. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...... to a state-of-the-art method for cartilage segmentation using one stage nearest neighbour classifier. Our method achieved better results than the state-of-the-art method for tibial as well as femoral cartilage segmentation. The next main contribution of the thesis deals with learning features autonomously...... image, respectively and this system is referred as triplanar convolutional neural network in the thesis. We applied the triplanar CNN for segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for cascaded classifier...

  8. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV......, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly...... complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties....

  9. Large-scale structure of time evolving citation networks

    Science.gov (United States)

    Leicht, E. A.; Clarkson, G.; Shedden, K.; Newman, M. E. J.

    2007-09-01

    In this paper we examine a number of methods for probing and understanding the large-scale structure of networks that evolve over time. We focus in particular on citation networks, networks of references between documents such as papers, patents, or court cases. We describe three different methods of analysis, one based on an expectation-maximization algorithm, one based on modularity optimization, and one based on eigenvector centrality. Using the network of citations between opinions of the United States Supreme Court as an example, we demonstrate how each of these methods can reveal significant structural divisions in the network and how, ultimately, the combination of all three can help us develop a coherent overall picture of the network's shape.
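
    Of the three methods named above, the eigenvector-centrality analysis is the easiest to sketch: power iteration on the transposed citation matrix, so that a document scores highly when it is cited by documents that themselves score highly. The toy network below contains cycles; for a strictly acyclic citation graph one would normally add a damping (teleportation) term, as in PageRank. The EM and modularity analyses of the paper are not reproduced.

        import numpy as np

        def eigenvector_centrality(adjacency, tol=1e-10, max_iter=1000):
            """Eigenvector centrality by power iteration.

            adjacency[i, j] = 1 if document i cites document j; centrality is the
            leading eigenvector of the transpose, i.e. scores flow along citations.
            """
            a = np.asarray(adjacency, dtype=float).T
            x = np.ones(a.shape[0]) / a.shape[0]
            for _ in range(max_iter):
                x_new = a @ x
                norm = np.linalg.norm(x_new)
                if norm == 0:
                    return x                      # graph has no citations at all
                x_new /= norm
                if np.linalg.norm(x_new - x) < tol:
                    break
                x = x_new
            return x_new

        # Toy network: 0 cites 1 and 2, 1 cites 2, 2 cites 0 and 3, 3 cites 2.
        A = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]])
        print(eigenvector_centrality(A))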

  10. Large-scale quantum photonic circuits in silicon

    Science.gov (United States)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

    Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ∼30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards

  11. The Large-scale Component of Mantle Convection

    Science.gov (United States)

    Cserepes, L.

    Circulation in the Earth's mantle occurs on multiple spatial scales: this review discusses the character of its large-scale or global components. Direct and strong evidence concerning the global flow comes, first of all, from the pattern of plate motion. Further indirect observational data which can be transformed into flow velocities by the equation of motion are the internal density heterogeneities revealed by seismic tomography, and the geoid can also be used as an observational constraint. Due to their limited spatial resolution, global tomographic data automatically filter out the small-scale features and are therefore relevant to the global flow pattern. Flow solutions obtained from tomographic models, using the plate motion as boundary condition, reveal that subduction is the downwelling of the global mantle circulation and that the deep-rooted upwellings are concentrated in 2-3 superplumes. Spectral analysis of the tomographic heterogeneities shows that the power of global flow appears dominantly in the lowest spherical harmonic orders 2-5. Theoretical convection calculations contribute substantially to the understanding of global flow. If basal heating of the mantle is significant, numerical models can reproduce the basic 2 to 5 cell pattern of convection even without the inclusion of surface plates. If plates are superimposed on the solution with their present arrangement and motion, the dominance of these low spherical harmonic orders is more pronounced. The cells are not necessarily closed, rather they show chaotic time-dependence, but they are normally bordered by long downwelling features, and they have usually a single superplume in the cell interior. Swarms of small plumes can develop in the large cells, especially when convection is partially layered due to an internal boundary such as the 670 km discontinuity (source of small plumes). These small plumes are usually tilted by the background large-scale flow which shows that they are

  12. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulations (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES and fail to scale (e.g., OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
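
    The de facto standard mentioned above, discrete event simulation, reduces in its simplest sequential form to a priority queue of timestamped events. The sketch below shows only that sequential core; the parallel (PDES) synchronization and the mobility/connectivity modelling that the project targets are not represented, and the event names are made up.

        import heapq

        def run_des(initial_events, handlers, t_end):
            """Minimal sequential discrete event simulation loop.

            initial_events : iterable of (time, event_name) tuples.
            handlers       : dict mapping event_name to a function(time) that
                             returns a list of new (time, event_name) tuples.
            """
            queue = list(initial_events)
            heapq.heapify(queue)
            while queue:
                t, name = heapq.heappop(queue)
                if t > t_end:
                    break
                for new_event in handlers[name](t):
                    heapq.heappush(queue, new_event)

        # Toy wireless-like example: each transmission schedules the next one.
        log = []
        def transmit(t, interval=1.5):
            log.append(t)
            return [(t + interval, "transmit")]

        run_des([(0.0, "transmit")], {"transmit": transmit}, t_end=10.0)
        print(log)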

  13. Near optimal bispectrum estimators for large-scale structure

    Science.gov (United States)

    Schmittfull, Marcel; Baldauf, Tobias; Seljak, Uroš

    2015-02-01

    Clustering of large-scale structure provides significant cosmological information through the power spectrum of density perturbations. Additional information can be gained from higher-order statistics like the bispectrum, especially to break the degeneracy between the linear halo bias b1 and the amplitude of fluctuations σ8. We propose new simple, computationally inexpensive bispectrum statistics that are near optimal for specific applications such as bias determination. Corresponding to the Legendre decomposition of nonlinear halo bias and gravitational coupling at second order, these statistics are given by the cross-spectra of the density with three quadratic fields: the squared density, a tidal term, and a shift term. For halos and galaxies the first two have associated nonlinear bias terms b2 and bs2, respectively, while the shift term has none in the absence of velocity bias (valid in the k → 0 limit). Thus the linear bias b1 is best determined by the shift cross-spectrum, while the squared density and tidal cross-spectra mostly tighten constraints on b2 and bs2 once b1 is known. Since the form of the cross-spectra is derived from optimal maximum-likelihood estimation, they contain the full bispectrum information on bias parameters. Perturbative analytical predictions for their expectation values and covariances agree with simulations on large scales, k ≲ 0.09 h/Mpc at z = 0.55 with Gaussian R = 20 h^-1 Mpc smoothing, for matter-matter-matter and matter-matter-halo combinations. For halo-halo-halo cross-spectra the model also needs to include corrections to the Poisson stochasticity.
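
    The machinery behind these statistics, cross-correlating the density with a quadratic field in Fourier space, can be sketched for the squared-density term alone; the tidal and shift fields, the smoothing, and the normalization conventions of the paper are omitted, and the Gaussian test field below (for which this cross-spectrum averages to zero) is only there to exercise the estimator.

        import numpy as np

        def cross_spectrum(field_a, field_b, boxsize, nbins=20):
            """Spherically averaged cross power spectrum of two periodic 3-D fields."""
            n = field_a.shape[0]
            fa = np.fft.rfftn(field_a)
            fb = np.fft.rfftn(field_b)
            cross = (fa * np.conj(fb)).real * (boxsize**3 / n**6)   # one common convention
            k1 = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
            kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
            kmag = np.sqrt(k1[:, None, None]**2 + k1[None, :, None]**2 + kz[None, None, :]**2)
            bins = np.linspace(0.0, kmag.max(), nbins + 1)
            which = np.digitize(kmag.ravel(), bins) - 1
            power = np.bincount(which, weights=cross.ravel(), minlength=nbins)
            counts = np.maximum(np.bincount(which, minlength=nbins), 1)
            centers = 0.5 * (bins[1:] + bins[:-1])
            return centers, (power / counts)[:nbins]

        # Cross-spectrum of a field with its own square, <delta^2, delta>.
        rng = np.random.default_rng(6)
        delta = rng.normal(size=(64, 64, 64))
        delta -= delta.mean()
        k, pk = cross_spectrum(delta**2 - (delta**2).mean(), delta, boxsize=500.0)
        print(k[:5], pk[:5])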

  14. Large-scale mass distribution in the Illustris simulation

    Science.gov (United States)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.

    2016-04-01

    Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 10^8 M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes, therefore it is likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.

  15. Sheltering in buildings from large-scale outdoor releases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, W.R.; Price, P.N.; Gadgil, A.J.

    2004-06-01

    An intentional or accidental large-scale airborne toxic release (e.g., a terrorist attack or an industrial accident) can cause severe harm to nearby communities. Under these circumstances, taking shelter in buildings can be an effective emergency response strategy. Some examples where shelter-in-place was successful at preventing injuries and casualties have been documented [1, 2]. As public education and preparedness are vital to ensure the success of an emergency response, many agencies have prepared documents advising the public on what to do during and after sheltering [3, 4, 5]. In this document, we will focus on the role buildings play in providing protection to occupants. The conclusions of this article are: (1) Under most circumstances, shelter-in-place is an effective response against large-scale outdoor releases. This is particularly true for releases of short duration (a few hours or less) and chemicals that exhibit non-linear dose-response characteristics. (2) The building envelope not only restricts the outdoor-indoor air exchange, but can also filter some biological or even chemical agents. Once indoors, the toxic materials can deposit or sorb onto indoor surfaces. All these processes contribute to the effectiveness of shelter-in-place. (3) Tightening of the building envelope and improved filtration can enhance the protection offered by buildings. The mechanical ventilation systems common in most commercial buildings, however, should be turned off and dampers closed when sheltering from an outdoor release. (4) After the passing of the outdoor plume, some residuals will remain indoors. It is therefore important to terminate shelter-in-place to minimize exposure to the toxic materials.
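
    Points (1) and (2) above rest on a simple mass balance: the indoor concentration lags and undershoots the outdoor plume because air exchange is slow and indoor deposition removes material. The single-zone sketch below illustrates this; the air-exchange and deposition rates are placeholder values, not recommendations.

        import numpy as np

        def indoor_concentration(c_out, dt, air_changes_per_hour=0.5, deposition_per_hour=0.2):
            """Well-mixed single-zone model:  dC_in/dt = lam*(C_out - C_in) - k*C_in."""
            lam, k = air_changes_per_hour, deposition_per_hour
            c_in = np.zeros_like(c_out, dtype=float)
            for i in range(1, len(c_out)):
                dcdt = lam * (c_out[i - 1] - c_in[i - 1]) - k * c_in[i - 1]
                c_in[i] = c_in[i - 1] + dcdt * dt
            return c_in

        # A 2-hour outdoor plume (arbitrary units); the indoor peak is lower and delayed.
        t = np.arange(0.0, 12.0, 0.01)                    # hours
        c_out = np.where((t > 1.0) & (t < 3.0), 100.0, 0.0)
        c_in = indoor_concentration(c_out, dt=0.01)
        print(f"outdoor peak {c_out.max():.0f}, indoor peak {c_in.max():.1f}")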

  16. On soft limits of large-scale structure correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sagunski, Laura

    2016-08-15

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the

  17. Assessing Programming Costs of Explicit Memory Localization on a Large Scale Shared Memory Multiprocessor

    Directory of Open Access Journals (Sweden)

    Silvio Picano

    1992-01-01

    Full Text Available We present detailed experimental work involving a commercially available large scale shared memory multiple instruction stream-multiple data stream (MIMD) parallel computer having a software controlled cache coherence mechanism. To make effective use of such an architecture, the programmer is responsible for designing the program's structure to match the underlying multiprocessor's capabilities. We describe the techniques used to exploit our multiprocessor (the BBN TC2000) on a network simulation program, showing the resulting performance gains and the associated programming costs. We show that an efficient implementation relies heavily on the user's ability to explicitly manage the memory system.

  18. Minimization of Linear Functionals Defined on Solutions of Large-Scale Discrete Ill-Posed Problems

    DEFF Research Database (Denmark)

    Elden, Lars; Hansen, Per Christian; Rojas, Marielba

    2003-01-01

    The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving...... the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...

  19. The Evolution and Natural State of Large-Scale Vapor-Dominated Zones

    Energy Technology Data Exchange (ETDEWEB)

    Ingebritsen, S.E.

    1986-01-21

    Numerical simulation is used to define the rather special conditions under which large-scale vapor-dominated zones can evolve. Given an adequate supply of heat, a vapor-dominated zone can evolve within low-permeability barriers without changes in rock properties or boundary conditions. However, the evolution of the system is accelerated in cases involving an initially high fluid throughflow rate that decreases with time. Near-steady-state pressures within the vapor-dominated zone are shown to vary with depth to the caprock.

  20. Output Regulation of Large-Scale Hydraulic Networks with Minimal Steady State Power Consumption

    DEFF Research Database (Denmark)

    Jensen, Tom Nørgaard; Wisniewski, Rafal; De Persis, Claudio;

    2014-01-01

    An industrial case study involving a large-scale hydraulic network is examined. The hydraulic network underlies a district heating system, with an arbitrary number of end-users. The problem of output regulation is addressed along with an optimization criterion for the control. The fact that the system is overactuated is exploited for minimizing the steady state electrical power consumption of the pumps in the system, while output regulation is maintained. The proposed control actions are decentralized in order to make changes in the structure of the hydraulic network easy to implement.

  1. Sex Trade Involvement in Sao Paulo, Brazil and Toronto, Canada: Narratives of Social Exclusion and Fragmented Identities

    Science.gov (United States)

    Kidd, Sean A.; Liborio, Renata Maria Coimbra

    2011-01-01

    An extensive international literature has been developed regarding the risk trajectories of sex trade-involved children and youth. This literature has not, however, substantially incorporated the narratives of youths regarding their experiences. In this article, the contemporary literature on child and youth sex trade-involvement is reviewed and…

  2. Energy from the desert very large scale PV power : state of the art and into the future

    CERN Document Server

    Komoto, Keiichi; Cunow, Edwin; Megherbi, Karim; Faiman, David; van der Vleuten, Peter

    2013-01-01

    The fourth volume in the established Energy from the Desert series examines and evaluates the potential and feasibility of Very Large Scale Photovoltaic Power Generation (VLS-PV) systems, which have capacities ranging from several megawatts to gigawatts, and to develop practical project proposals toward implementing the VLS-PV systems in the future. It comprehensively analyses all major issues involved in such large scale applications, based on the latest scientific and technological developments by means of close international co-operation with experts from different countries. From t

  3. Large-scale simulations of layered double hydroxide nanocomposite materials

    Science.gov (United States)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) in order to launch simulations on remote supercomputing resources and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of MgAl-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up

  4. Alignment between galaxies and large-scale structure

    Institute of Scientific and Technical Information of China (English)

    A. Faltenbacher; Cheng Li; Simon D. M. White; Yi-Peng Jing; Shu-De Mao; Jie Wang

    2009-01-01

    Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L ≳ L*) galaxies out to projected separations of 60 h^-1 Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from a MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous and central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ~ 25°. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference galaxy
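
    The cos(2θ)-statistic mentioned above is, at its core, an average of cos(2θ) over galaxy-reference pairs, with θ the angle between a galaxy's projected major axis and the direction to a reference galaxy. The sketch below shows only that core on made-up isotropic positions; the projection to physical separations, the binning, and the survey estimator details are omitted.

        import numpy as np

        def cos2theta_statistic(positions, position_angles, ref_positions, r_max=60.0):
            """Mean cos(2*theta) over galaxy--reference pairs within r_max.

            positions       : (N, 2) projected positions of orientated galaxies.
            position_angles : (N,) major-axis position angles in radians.
            ref_positions   : (M, 2) projected positions of the reference sample.
            Positive values mean reference galaxies lie preferentially along the
            major axes; zero means no alignment.
            """
            values = []
            for (x, y), pa in zip(positions, position_angles):
                dx = ref_positions[:, 0] - x
                dy = ref_positions[:, 1] - y
                r = np.hypot(dx, dy)
                sel = (r > 0) & (r < r_max)
                theta = np.arctan2(dy[sel], dx[sel]) - pa
                values.append(np.cos(2 * theta))
            return float(np.concatenate(values).mean())

        # Isotropic toy data: the statistic should be consistent with zero.
        rng = np.random.default_rng(7)
        gal, pa = rng.uniform(0, 200, size=(50, 2)), rng.uniform(0, np.pi, size=50)
        ref = rng.uniform(0, 200, size=(2000, 2))
        print(cos2theta_statistic(gal, pa, ref))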

  5. Local and Regional Impacts of Large Scale Wind Energy Deployment

    Science.gov (United States)

    Michalakes, J.; Hammond, S.; Lundquist, J. K.; Moriarty, P.; Robinson, M.

    2010-12-01

    The U.S. is currently on a path to produce 20% of its electricity from wind energy by 2030, almost a 10-fold increase over present levels of electricity generated from wind. Such high-penetration wind energy deployment will entail extracting elevated energy levels from the planetary boundary layer and preliminary studies indicate that this will have significant but uncertain impacts on the local and regional environment. State and federal regulators have raised serious concerns regarding potential agricultural impacts from large wind farms deployed throughout the Midwest, where agriculture is the basis of the local economy. The effects of large wind farms have been proposed to be both beneficial (drying crops to reduce occurrences of fungal diseases, avoiding late spring freezes, enhancing pollen viability, reducing dew duration) and detrimental (accelerating moisture loss during drought) with no conclusive investigations thus far. As both wind and solar technologies are deployed at scales required to replace conventional technologies, there must be reasonable certainty that the potential environmental impacts at the micro, macro, regional and global scale do not exceed those anticipated from carbon emissions. Largely because of computational limits, the role of large wind farms in affecting regional-scale weather patterns has only been investigated in coarse simulations, and modeling tools do not yet exist that are capable of assessing the downwind effects large wind farms may have on microclimatology. In this presentation, we will outline the vision for and discuss technical and scientific challenges in developing a multi-model high-performance simulation capability covering the range of mesoscale to sub-millimeter scales appropriate for assessing local, regional, and ultimately global environmental impacts and quantifying uncertainties of large scale wind energy deployment scenarios. Such a system will allow continuous downscaling of atmospheric processes on wind

  6. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variation of the thermal performance of such plants, a method is proposed where data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with a simulation tool, design studies are carried out, ranging from parameter analysis, through energy planning for a new settlement, to a proposal for the combination of plane solar collectors with high performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need for developing computer models for the more advanced solar collector designs and especially for the control operation of CSHPs. In the final chapter the CSHP technology is put into perspective with respect to other possible technologies to find the relevance of the application

  7. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.
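
    As a minimal illustration of the kind of dynamic spatio-temporal raster model such a framework runs, the sketch below updates a storage raster from a rainfall raster over daily time steps. It is written in plain NumPy and deliberately does not use the PCRaster Python API; the grid size, rainfall distribution and linear-reservoir outflow are arbitrary placeholders.

```python
# Plain-NumPy sketch of a dynamic spatio-temporal raster model time loop
# (deliberately not the PCRaster Python API). Each daily step adds a random
# rainfall raster to a storage raster and drains it as simple linear runoff.
import numpy as np

rng = np.random.default_rng(5)
storage = np.zeros((100, 100))                # per-cell water storage
for day in range(365):
    rain = rng.gamma(shape=0.5, scale=2.0, size=storage.shape)
    runoff = 0.1 * storage                    # linear-reservoir outflow
    storage = storage + rain - runoff

print("mean end-of-year storage:", round(float(storage.mean()), 2))
```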

  8. Large-scale bias of dark matter halos

    CERN Document Server

    Valageas, Patrick

    2010-01-01

    We build a simple analytical model for the bias of dark matter halos that applies to objects defined by an arbitrary density threshold, $200\leq\delta\leq 1600$, and that provides accurate predictions from low-mass to high-mass halos. We point out that it is possible to build simple and efficient models, with no free parameter for the halo bias, by using integral constraints that govern the behavior of low-mass and typical halos, whereas the properties of rare massive halos are derived through explicit asymptotic approaches. We also describe how to take into account the impact of halo motions on their bias, using their linear displacement field. We obtain a good agreement with numerical simulations for the halo mass functions and large-scale bias at redshifts $0\leq z \leq 2.5$, for halos defined by nonlinear density threshold $200\leq\delta\leq 1600$. We also evaluate the impact on the halo bias of two common approximations, i) neglecting halo motions, and ii) linearizing the halo two-point correlation.

  9. Experience analyzing wind data for large-scale integration

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Zhi; Dai, RenChang; Freeman, Lavelle A.; Miller, Nicholas W.; Shao, Miaolei [GE Energy Consulting Group, Schenectady, NY (United States)

    2010-07-01

    Wind is a major piece of the green energy effort, and will certainly play a more important role in the future power industry. GE Energy has conducted a number of large-scale renewable integration studies in North America. The objective of these studies is to understand how integrating large amounts of variable energy resources into the supply mix affects grid operation and economics. As part of this effort, various statistical analyses were performed to characterize the variability and uncertainty of wind generation. Based on the results of this characterization, further engineering and economic studies are performed to assess operational requirements, costs, and savings attributable to wind resources. For these analyses, a large amount of input data is usually required, and is often obtained in different formats. These data sets are not very intuitive at first glance, and need extensive effort to be developed into something informative. Based on project experience, different methods have been developed to explore and extrapolate the information hidden within large amounts of raw data. Algorithms and macros have been written to validate and correct data, to create summary information, and to produce derived data sets for further analyses. Informative plots and charts have also been programmed into various applications to provide quick, useful analysis when needed. This article introduces some illustrative and easy-to-analyze ways to look at these data using readily available tools. (orig.)
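
    A minimal sketch of the kind of variability characterization described above, computed on a synthetic hourly output series rather than real plant data; the 100 MW nameplate capacity, diurnal cycle and noise level are assumptions for illustration only.

```python
# Sketch of basic variability statistics for an hourly wind-generation series.
# The series is synthetic (100 MW nameplate, diurnal cycle plus noise); real
# studies would load measured or simulated plant output instead.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24 * 365)                       # one year of hourly steps
output_mw = np.clip(50 + 30 * np.sin(2 * np.pi * t / 24)
                    + rng.normal(scale=15, size=t.size), 0, 100)

ramps = np.diff(output_mw)                    # hour-to-hour changes in MW
print("capacity factor:", round(output_mw.mean() / 100, 3))
print("ramp std (MW/h):", round(ramps.std(), 2))
print("99th percentile up-ramp (MW/h):", round(np.quantile(ramps, 0.99), 2))
```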

  10. Silver nanoparticles: Large scale solvothermal synthesis and optical properties

    Energy Technology Data Exchange (ETDEWEB)

    Wani, Irshad A.; Khatoon, Sarvari [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India); Ganguly, Aparna [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India); Department of Chemistry, Indian Institute of Technology, Hauz Khas, New Delhi 110016 (India); Ahmed, Jahangeer; Ganguli, Ashok K. [Department of Chemistry, Indian Institute of Technology, Hauz Khas, New Delhi 110016 (India); Ahmad, Tokeer, E-mail: tokeer.ch@jmi.ac.in [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India)

    2010-08-15

    Silver nanoparticles have been successfully synthesized by a simple and modified solvothermal method at large scale using ethanol as the refluxing solvent and NaBH{sub 4} as reducing agent. The nanopowder was investigated by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), UV-visible and BET surface area studies. XRD studies reveal the monophasic nature of these highly crystalline silver nanoparticles. Transmission electron microscopic studies show monodisperse and highly uniform silver nanoparticles with a particle size of 5 nm; however, the size is found to be 7 nm using dynamic light scattering, which is in good agreement with the TEM and X-ray line broadening studies. The surface area was found to be 34.5 m{sup 2}/g. UV-visible studies show the absorption band at {approx}425 nm due to surface plasmon resonance. The percentage yield of silver nanoparticles was found to be as high as 98.5%.

  11. Analysis of the large-scale structure of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Doroshkevich, A.G.; Kotok, E.V.; Shandarin, S.F.; Sigov, Yu.S. (AN SSSR, Moscow. Inst. of Applied Mathematics)

    1983-02-01

    A method of calculation of the large-scale structure of the Universe based on the adiabatic theory (A-theory) of its formation is proposed. The initial spectrum of perturbation is related to some observable parameters of the structure, which are objectively defined as a set of regions enclosed by the border of constant density ρ = ρ_c (ρ_c is a free parameter of the theory). The parameters are: (1) W is the fraction of matter within the regions of high density ρ > ρ_c; (2) L is the mean size of a region defined as the diameter of the circle circumscribed around the region; (3) D is a mean separation of dense regions taken along a straight line and (4) n is a mean number of dense regions in a unit area. Equations relate these parameters to the fundamental length which is associated with the initial spectrum. The conclusions of the theory are checked by several numerical models and are applied to observational parameters.

  12. Large Scale Structure in the SDSS Galaxy Survey

    CERN Document Server

    Doroshkevich, A G; Tucker, D L

    2004-01-01

    The Large Scale Structure (LSS) in the galaxy distribution is investigated using the Sloan Digital Sky Survey Early Data Release (SDSS EDR). Using the Minimal Spanning Tree technique we have extracted sets of filaments, of wall-like structures, of galaxy groups, and of rich clusters from this unique sample. The physical properties of these structures were then measured and compared with the expectations from Zel'dovich's theory. The measured characteristics of galaxy walls were found to be consistent with those for a spatially flat $\Lambda$CDM cosmological model with $\Omega_m\approx$ 0.3 and $\Omega_\Lambda \approx$ 0.7, and for Gaussian initial perturbations with a Harrison -- Zel'dovich power spectrum. Furthermore, we found that the mass functions of groups and of unrelaxed structure elements generally fit well with the expectations from Zel'dovich's theory, although there was some discrepancy for lower mass groups which may be due to incompleteness in the selected sample of groups. We also note that both g...

  13. Large Scale Structure in the SDSS DR1 Galaxy Survey

    CERN Document Server

    Doroshkevich, A G; Allam, S S; Way, M J

    2003-01-01

    The Large Scale Structure in the galaxy distribution is investigated using The First Data Release of the Sloan Digital Sky Survey. Using the Minimal Spanning Tree technique we have extracted sets of filaments, of wall-like structures, of galaxy groups, and of rich clusters from this unique sample. The physical properties of these structures were then measured and compared with the statistical expectations based on Zel'dovich's theory. The measured characteristics of galaxy walls were found to be consistent with those for a spatially flat $\Lambda$CDM cosmological model with $\Omega_m\approx$ 0.3 and $\Omega_\Lambda \approx$ 0.7, and for Gaussian initial perturbations with a Harrison -- Zel'dovich power spectrum. Furthermore, we found that the mass functions of groups and of unrelaxed structure elements generally fit well with the expectations from Zel'dovich's theory. We also note that both groups and rich clusters tend to prefer the environments of walls, which tend to be of higher density, rather than th...

  14. Nonlinear density fluctuation field theory for large scale structure

    Institute of Scientific and Technical Information of China (English)

    Yang Zhang; Hai-Xing Miao

    2009-01-01

    We develop an effective field theory of density fluctuations for a Newtonian self-gravitating N-body system in quasi-equilibrium and apply it to a homogeneous universe with small density fluctuations. Keeping the density fluctuations up to second order, we obtain the nonlinear field equation of 2-pt correlation ξ(r), which contains 3-pt correlation and formal ultra-violet divergences. By the Groth-Peebles hierarchical ansatz and mass renormalization, the equation becomes closed with two new terms beyond the Gaussian approximation, and their coefficients are taken as parameters. The analytic solution is obtained in terms of the hypergeometric functions, which is checked numerically. With one single set of two fixed parameters, the correlation ξ(r) and the corresponding power spectrum P(k) simultaneously match the results from all the major surveys, such as APM, SDSS, 2dfGRS, and REFLEX. The model gives a unifying understanding of several seemingly unrelated features of large scale structure from a field-theoretical perspective. The theory is worth extending to study the evolution effects in an expanding universe.

  15. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
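
    The sketch below illustrates the kind of multi-attribute range query that such index-and-query systems accelerate, using plain NumPy boolean masks rather than the FastBit/FastQuery API; the particle attributes, distributions and thresholds are invented for illustration.

```python
# Conceptual sketch of a multi-attribute range query over particle data using
# plain NumPy boolean masks (not the FastBit/FastQuery API). Attribute names,
# distributions and thresholds are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000                                  # stand-in for a huge dataset
energy = rng.lognormal(mean=0.0, sigma=1.0, size=n)
x = rng.uniform(0.0, 1.0, size=n)

# "Interesting" particles: high energy inside a spatial slab.
mask = (energy > 5.0) & (x > 0.4) & (x < 0.6)
selected = np.flatnonzero(mask)
print(f"selected {selected.size} of {n} particles")
```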

  16. Large-scale functional purification of recombinant HIV-1 capsid.

    Directory of Open Access Journals (Sweden)

    Magdeleine Hung

    Full Text Available During human immunodeficiency virus type-1 (HIV-1) virion maturation, capsid proteins undergo a major rearrangement to form a conical core that protects the viral nucleoprotein complexes. Mutations in the capsid sequence that alter the stability of the capsid core are deleterious to viral infectivity and replication. Recently, capsid assembly has become an attractive target for the development of a new generation of anti-retroviral agents. Drug screening efforts and subsequent structural and mechanistic studies require gram quantities of active, homogeneous and pure protein. Conventional means of laboratory purification of Escherichia coli expressed recombinant capsid protein rely on column chromatography steps that are not amenable to large-scale production. Here we present a function-based purification of wild-type and quadruple mutant capsid proteins, which relies on the inherent propensity of capsid protein to polymerize and depolymerize. This method does not require the packing of sizable chromatography columns and can generate double-digit gram quantities of functionally and biochemically well-behaved proteins with greater than 98% purity. We have used the purified capsid protein to characterize two known assembly inhibitors in our in-house developed polymerization assay and to measure their binding affinities. Our capsid purification procedure provides a robust method for purifying large quantities of a key protein in the HIV-1 life cycle, facilitating identification of the next generation anti-HIV agents.

  17. Development of large-scale functional networks over the lifespan.

    Science.gov (United States)

    Schlee, Winfried; Leirer, Vera; Kolassa, Stephan; Thurm, Franka; Elbert, Thomas; Kolassa, Iris-Tatjana

    2012-10-01

    The development of large-scale functional organization of the human brain across the lifespan is not well understood. Here we used magnetoencephalographic recordings of 53 adults (ages 18-89) to characterize functional brain networks in the resting state. Slow frequencies engage larger networks than higher frequencies and show different development over the lifespan. Networks in the delta (2-4 Hz) frequency range decrease, while networks in the beta/gamma frequency range (> 16 Hz) increase in size with advancing age. Results show that the right frontal lobe and the temporal areas in both hemispheres are important relay stations in the expanding high-frequency networks. Neuropsychological tests confirmed the tendency of cognitive decline with older age. The decrease in visual memory and visuoconstructive functions was strongly associated with the age-dependent enhancement of functional connectivity in both temporal lobes. Using functional network analysis, this study elucidates important neuronal principles underlying age-related cognitive decline, which paves the way for mental deterioration in senescence.
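
    As a simplified illustration of frequency-resolved connectivity of the sort used in such network studies, the sketch below band-passes two synthetic channels in the delta band and correlates their amplitude envelopes; real MEG network analysis additionally involves source reconstruction and more robust connectivity measures, and the signals here are synthetic.

```python
# Simplified illustration of frequency-resolved coupling: band-pass two
# synthetic channels in the delta band (2-4 Hz) and correlate their amplitude
# envelopes. Real MEG network analysis uses source-space signals and more
# robust connectivity measures.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                     # sampling rate in Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(2)
ch1 = np.sin(2 * np.pi * 3 * t) + 0.5 * rng.normal(size=t.size)
ch2 = np.sin(2 * np.pi * 3 * t + 0.3) + 0.5 * rng.normal(size=t.size)

b, a = butter(4, [2 / (fs / 2), 4 / (fs / 2)], btype="band")
env1 = np.abs(hilbert(filtfilt(b, a, ch1)))
env2 = np.abs(hilbert(filtfilt(b, a, ch2)))
print("delta-band envelope correlation:",
      round(float(np.corrcoef(env1, env2)[0, 1]), 3))
```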

  18. Deciphering landslide behavior using large-scale flume experiments

    Science.gov (United States)

    Reid, Mark E.; Iverson, Richard M.; Iverson, Neal R.; LaHusen, Richard G.; Brien, Dianne L.; Logan, Matthew

    2008-01-01

    Landslides can be triggered by a variety of hydrologic events and they can exhibit a wide range of movement dynamics. Effective prediction requires understanding these diverse behaviors. Precise evaluation in the field is difficult; as an alternative we performed a series of landslide initiation experiments in the large-scale, USGS debris-flow flume. We systematically investigated the effects of three different hydrologic triggering mechanisms, including groundwater exfiltration from bedrock, prolonged rainfall infiltration, and intense bursts of rain. We also examined the effects of initial soil porosity (loose or dense) relative to the soil’s critical-state porosity. Results show that all three hydrologic mechanisms can instigate landsliding, but water pathways, sensor response patterns, and times to failure differ. Initial soil porosity has a profound influence on landslide movement behavior. Experiments using loose soil show rapid soil contraction during failure, with elevated pore pressures liquefying the sediment and creating fast-moving debris flows. In contrast, dense soil dilated upon shearing, resulting in slow, gradual, and episodic motion. These results have fundamental implications for forecasting landslide behavior and developing effective warning systems.

  19. Large scale filaments associated with Milky Way spiral arms

    CERN Document Server

    Wang, Ke; Ginsburg, Adam; Walmsley, C Malcolm; Molinari, Sergio; Schisano, Eugenio

    2015-01-01

    The ubiquity of filamentary structure at various scales throughout the Galaxy has triggered a renewed interest in their formation, evolution, and role in star formation. The largest filaments can reach up to Galactic scale as part of the spiral arm structure. However, such large scale filaments are hard to identify systematically due to limitations in identification methodology (i.e., as extinction features). We present a new approach to directly search for the largest, coldest, and densest filaments in the Galaxy, making use of sensitive Herschel Hi-GAL data complemented by spectral line cubes. We present a sample of the 9 most prominent Herschel filaments, including 6 identified from a pilot search field plus 3 from outside the field. These filaments measure 37-99 pc long and 0.6-3.0 pc wide with masses (0.5-8.3)$\times10^4 \, M_\odot$, and beam-averaged ($28"$, or 0.4-0.7 pc) peak H$_2$ column densities of (1.7-9.3)$\times 10^{22} \, \rm{cm^{-2}}$. The bulk of the filaments are relatively cold (17-21 K), whi...

  20. Impact of large scale flows on turbulent transport

    Energy Technology Data Exchange (ETDEWEB)

    Sarazin, Y [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Grandgirard, V [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Dif-Pradalier, G [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Fleurence, E [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Garbet, X [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Ghendrih, Ph [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Bertrand, P [LPMIA-Universite Henri Poincare Nancy I, Boulevard des Aiguillettes BP239, 54506 Vandoe uvre-les-Nancy (France); Besse, N [LPMIA-Universite Henri Poincare Nancy I, Boulevard des Aiguillettes BP239, 54506 Vandoe uvre-les-Nancy (France); Crouseilles, N [IRMA, UMR 7501 CNRS/Universite Louis Pasteur, 7 rue Rene Descartes, 67084 Strasbourg (France); Sonnendruecker, E [IRMA, UMR 7501 CNRS/Universite Louis Pasteur, 7 rue Rene Descartes, 67084 Strasbourg (France); Latu, G [LSIIT, UMR 7005 CNRS/Universite Louis Pasteur, Bd Sebastien Brant BP10413, 67412 Illkirch (France); Violard, E [LSIIT, UMR 7005 CNRS/Universite Louis Pasteur, Bd Sebastien Brant BP10413, 67412 Illkirch (France)

    2006-12-15

    The impact of large scale flows on turbulent transport in magnetized plasmas is explored by means of various kinetic models. Zonal flows are found to lead to a non-linear upshift of turbulent transport in a 3D kinetic model for interchange turbulence. Such a transition is absent from fluid simulations, performed with the same numerical tool, which also predict a much larger transport. The discrepancy cannot be explained by zonal flows only, even though they are overdamped in fluids. Indeed, some difference remains, although reduced, when they are artificially suppressed. Zonal flows are also reported to trigger transport barriers in a 4D drift-kinetic model for slab ion temperature gradient (ITG) turbulence. The density gradient acts as a source drive for zonal flows, while their curvature back stabilizes the turbulence. Finally, 5D simulations of toroidal ITG modes with the global and full-f GYSELA code require the equilibrium density function to depend on the motion invariants only. If not, the generated strong mean flows can completely quench turbulent transport.

  1. Constraints on modified Chaplygin gas from large scale structure

    Science.gov (United States)

    Paul, Bikash Chandra; Thakur, Prasenjit; Beesham, Aroon

    2016-10-01

    We study cosmological models with modified Chaplygin gas (MCG) to determine observational constraints on its EoS parameters using background and growth test data. The background test data consist of H(z)-z data, the Baryonic Acoustic Oscillation peak parameter, the CMB shift parameter and SN Ia data, while the growth test data consist of the linear growth function for the large-scale structures of the universe; both are used to study MCG as a dark energy candidate. For a given range of redshift, the Wiggle-Z measurements and rms mass fluctuations from Ly-α data are employed to analyze the cosmological models numerically and constrain the MCG parameters. The Wang-Steinhardt ansatz for the growth index (γ) and growth function (f) is also considered for the numerical analysis. The best-fit values of the EoS parameters determined here are used to study the variation of f, the growth index (γ), the EoS parameter, the squared sound speed and the deceleration parameter with redshift. The constraints on the MCG parameters found here are compared with those of the GCG (generalized Chaplygin gas) model for viable cosmology. Cosmologies with MCG satisfactorily describe late acceleration followed by a matter-dominated phase. The ranges of the EoS parameters and the associated parameters (f, γ, ω, Ω, c_s^2, q) are also determined from observational data in order to understand the suitability of the MCG model.

  2. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29--31 January, 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  3. Storm induced large scale TIDs observed in GPS derived TEC

    Directory of Open Access Journals (Sweden)

    C. Borries

    2009-04-01

    Full Text Available This work is a first statistical analysis of large scale traveling ionospheric disturbances (LSTID) in Europe using total electron content (TEC) data derived from GNSS measurements. The GNSS receiver network in Europe is dense enough to map the ionospheric perturbation TEC with high horizontal resolution. The derived perturbation TEC maps are analysed studying the effect of space weather events on the ionosphere over Europe.

    Equatorward propagating storm induced wave packets have been identified during several geomagnetic storms. Characteristic parameters such as velocity, wavelength and direction were estimated from the perturbation TEC maps. Showing a mean wavelength of 2000 km, a mean period of 59 min and an average phase speed of 684 m s−1, the perturbations are classified as LSTIDs. The comparison to LSTIDs observed over Japan shows an equal wavelength but a considerably faster phase speed. This might be attributed to the differences in the distance to the auroral region or inclination/declination of the geomagnetic field lines.

    The observed correlation between the LSTID amplitudes and the Auroral Electrojet (AE) indicates that most of the wave-like perturbations are excited by Joule heating. Particle precipitation effects could not be separated.

  4. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    Science.gov (United States)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At the present time computer centres are facing a massive rise in virtualization and cloud computing as these solutions bring advantages to service providers and consolidate the computer centre resources. However, as a result the monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to have a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent that collects data on every client and forwards the samples to the central measurement repository provides a flexible interface that allows rapid development of new sensors. The system also allows reporting on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of the Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.

  5. Global Wildfire Forecasts Using Large Scale Climate Indices

    Science.gov (United States)

    Shen, Huizhong; Tao, Shu

    2016-04-01

    Using weather readings, fire early warning can provide forecasts 4-6 hours in advance to minimize fire loss. The benefit would be dramatically enhanced if relatively accurate long-term projections could also be provided. Here we present a novel method for predicting global fire season severity (FSS) at least three months in advance using multiple large-scale climate indices (CIs). The predictive ability is proven effective for various geographic locations and resolutions. Globally, as well as in most continents, the El Niño Southern Oscillation (ENSO) is the dominant driving force controlling interannual FSS variability, whereas other CIs also play indispensable roles. We found that a moderate El Niño event is responsible for 465 (272-658 as interquartile range) Tg carbon release and an annual increase of 29,500 (24,500-34,800) deaths from inhalation exposure to air pollutants. Southeast Asia accounts for half of the deaths. Both intercorrelation and interaction of WPs and CIs are revealed, suggesting possible climate-induced modification of fire responses to weather conditions. Our models can benefit fire management in response to climate change.
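
    The abstract does not specify the statistical model, so the sketch below only illustrates the general approach of regressing fire season severity on lagged climate indices, using synthetic data and an ordinary linear model as stand-ins.

```python
# Illustrative regression of fire season severity on lagged climate indices.
# All data here are synthetic placeholders; the paper's actual predictors,
# lags and model form are not reproduced.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
years = 30
enso = rng.normal(size=years)                  # e.g. an ENSO index, lagged
other_ci = rng.normal(size=years)              # a second climate index
severity = 2.0 * enso + 0.5 * other_ci + rng.normal(scale=0.3, size=years)

X = np.column_stack([enso, other_ci])
model = LinearRegression().fit(X, severity)
print("in-sample R^2:", round(model.score(X, severity), 3))
print("index coefficients:", model.coef_)
```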

  6. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    CERN Document Server

    Alvarez, Marcelo; Bond, J Richard; Dalal, Neal; de Putter, Roland; Doré, Olivier; Green, Daniel; Hirata, Chris; Huang, Zhiqi; Huterer, Dragan; Jeong, Donghui; Johnson, Matthew C; Krause, Elisabeth; Loverde, Marilena; Meyers, Joel; Meerburg, P Daniel; Senatore, Leonardo; Shandera, Sarah; Silverstein, Eva; Slosar, Anže; Smith, Kendrick; Zaldarriaga, Matias; Assassi, Valentin; Braden, Jonathan; Hajian, Amir; Kobayashi, Takeshi; Stein, George; van Engelen, Alexander

    2014-01-01

    The statistics of primordial curvature fluctuations are our window into the period of inflation, where these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Large-scale structure, however, is where drastic improvements should originate. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude $f_{\rm NL}^{\rm loc}$ ($f_{\rm NL}^{\rm eq}$), natural target levels of sensitivity are $\Delta f_{\rm NL}^{\rm loc, eq.} \simeq 1$. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014...

  7. Simulated evolution of the dark matter large-scale structure

    CERN Document Server

    Demiański, M; Pilipenko, S; Gottlöber, S

    2011-01-01

    We analyze the evolution of the basic properties of simulated large scale structure elements formed by dark matter (DM LSS) and confront it with the observed evolution of the Lyman-$\alpha$ forest. In three high resolution simulations we selected samples of compact DM clouds of moderate overdensity. Clouds are selected at redshifts $0\leq z\leq 3$ with the Minimal Spanning Tree (MST) technique. The main properties of the so selected clouds are analyzed in 3D space and with the core sampling approach, which allows us to compare estimates of the DM LSS evolution obtained with two different techniques and to clarify some important aspects of the LSS evolution. In both cases we find that regular redshift variations of the mean characteristics of the DM LSS are accompanied only by small variations of their PDFs, which indicates the self-similar character of the DM LSS evolution. The high degree of relaxation of DM particles compressed within the LSS is found along the shortest principal axis of clouds. We see that the inter...

  8. Large-scale comparative visualisation of sets of multidimensional data

    Directory of Open Access Journals (Sweden)

    Dany Vohl

    2016-10-01

    Full Text Available We present encube—a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: (1) the integration of comparative visualisation and analysis into a unified system; (2) the documentation of the discovery process; and (3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local desktop, making it a versatile solution. We discuss how our approach can help accelerate the discovery rate in a variety of research scenarios.

  9. Locating inefficient links in a large-scale transportation network

    Science.gov (United States)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

    Based on data from geographical information system (GIS) and daily commuting origin destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution when ΔT ≠ 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
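
    A minimal sketch of the link-closure experiment described above: remove one edge and measure the change in total travel time. Real traffic assignment uses congestion-dependent link costs and OD demand; here a toy graph with uniform free-flow shortest-path times stands in for them.

```python
# Minimal sketch of the link-closure experiment: close one edge and measure
# the change in total shortest-path travel time. A toy graph with uniform
# free-flow times stands in for the real network and congestion-aware costs.
import networkx as nx

G = nx.karate_club_graph()                     # placeholder road network
nx.set_edge_attributes(G, 1.0, "time")         # uniform free-flow travel time

def total_travel_time(graph):
    lengths = dict(nx.all_pairs_dijkstra_path_length(graph, weight="time"))
    return sum(sum(d.values()) for d in lengths.values())

base = total_travel_time(G)
edge = (0, 1)                                  # candidate link to close
H = G.copy()
H.remove_edge(*edge)
if nx.is_connected(H):
    print("delta T for closing", edge, "=", total_travel_time(H) - base)
```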

  10. Large-scale BAO signatures of the smallest galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Dalal, Neal; Pen, Ue-Li [Canadian Institute for Theoretical Astrophyics, University of Toronto, 60 St. George St., Toronto, Ontario M5S 3H8 (Canada); Seljak, Uros, E-mail: neal@cita.utoronto.ca, E-mail: pen@cita.utoronto.ca, E-mail: useljak@berkeley.edu [Physics Department and Lawrence Berkeley National Laboratory, University of California, Berkeley, California 94720 (United States)

    2010-11-01

    Recent work has shown that at high redshift, the relative velocity between dark matter and baryonic gas is typically supersonic. This relative velocity suppresses the formation of the earliest baryonic structures like minihalos, and the suppression is modulated on large scales. This effect imprints a characteristic shape in the clustering power spectrum of the earliest structures, with significant power on ∼ 100 Mpc scales featuring highly pronounced baryon acoustic oscillations. The amplitude of these oscillations is orders of magnitude larger at z ∼ 20 than previously expected. This characteristic signature can allow us to distinguish the effects of minihalos on intergalactic gas at times preceding and during reionization. We illustrate this effect with the example of 21 cm emission and absorption from redshifts during and before reionization. This effect can potentially allow us to probe physics on kpc scales using observations on 100 Mpc scales. We present sensitivity forecasts for FAST and Arecibo. Depending on parameters, this enhanced structure may be detectable by Arecibo at z ∼ 15−20, and with appropriate instrumentation FAST could measure the BAO power spectrum with high precision. In principle, this effect could also pose a serious challenge for efforts to constrain dark energy using observations of the BAO feature at low redshift.

  11. Kernel Projection Algorithm for Large-Scale SVM Problems

    Institute of Scientific and Technical Information of China (English)

    王家琦; 陶卿; 王珏

    2002-01-01

    Support Vector Machine (SVM) has become a very effective method in statistical machine learning and it has been proved that training an SVM amounts to solving the Nearest Point Pair Problem (NPP) between two disjoint closed convex sets. Later Keerthi pointed out that it is difficult to apply classical excellent geometric algorithms directly to SVM and so designed a new geometric algorithm for SVM. In this article, a new algorithm for geometrically solving SVM, the Kernel Projection Algorithm, is presented based on the theorem on fixed points of the projection mapping. This new algorithm makes it easy to apply classical geometric algorithms to solving SVM and is more understandable than Keerthi's. Experiments show that the new algorithm can also handle large-scale SVM problems. Geometric algorithms for SVM, such as Keerthi's algorithm, require that the two closed convex sets be disjoint; otherwise the algorithms are meaningless. In this article, this requirement will be guaranteed in theory by using the theoretical result on universal kernel functions.

  12. Development of a Large Scale, High Speed Wheel Test Facility

    Science.gov (United States)

    Kondoleon, Anthony; Seltzer, Donald; Thornton, Richard; Thompson, Marc

    1996-01-01

    Draper Laboratory, with its internal research and development budget, has for the past two years been funding a joint effort with the Massachusetts Institute of Technology (MIT) for the development of a large scale, high speed wheel test facility. This facility was developed to perform experiments and carry out evaluations on levitation and propulsion designs for MagLev systems currently under consideration. The facility was developed to rotate a large (2 meter) wheel which could operate with peripheral speeds of greater than 100 meters/second. The rim of the wheel was constructed of a non-magnetic, non-conductive composite material to avoid the generation of errors from spurious forces. A sensor package containing a multi-axis force and torque sensor mounted to the base of the station provides a signal of the lift and drag forces on the package being tested. Position tables mounted on the station allow for the introduction of errors in real time. A computer controlled data acquisition system was developed around a Macintosh IIfx to record the test data and control the speed of the wheel. This paper describes the development of this test facility. A detailed description of the major components is presented. Recently completed tests carried out on a novel Electrodynamic (EDS) suspension system, developed by MIT as part of this joint effort, are described and presented. Adaptation of this facility for linear motor and other propulsion and levitation testing is described.

  13. Large-scale Vacuum Vessel Design and Finite Element Analysis

    Institute of Scientific and Technical Information of China (English)

    WANG Wenlong; CAI Guobiao; ZHOU Jianping

    2012-01-01

    The vacuum plume effects experimental system (PES) is the first experimental system designed to study the effects of vacuum plume in China. The main equipment, a vacuum chamber of 5.5 m in diameter and 12.8 m in length, and structure design of hinged door are described. The finite element method (FEM) is adopted to analyze the static strength and stability of the PES vacuum chamber. It is demonstrated that the static strength and stability are qualified. For the 5.5 m diameter vacuum chamber door, three design schemes are put forward. After comparisons are made, the single-axis-double-pin hinged door is selected. The FEM is applied to checking its static strength as well as distortions. The results show that the door's distortion and displacement change mainly due to the gravity of the door which leads to its sinking. The calculated displacement is less than 7.8 mm, while the actual measurement is 5 mm. The single-axis-double-pin hinged door mechanism completely satisfies the design requirements. This innovative structure can be introduced as a reference for the design of large-scale hinged doors.

  14. Interpreting new data on large scale bulk flows

    CERN Document Server

    Watkins, R; Watkins, Richard; Feldman, Hume

    1995-01-01

    We study the implications of a recent estimate of the bulk flow of a set of galaxies containing supernovae type Ia by Riess, Press, and Kirshner. We find that their results are quite consistent with power spectra from several currently popular models of structure formation, but that the sample is as yet too sparse to put significant constraints on the power spectrum. We compare this new result with that of Lauer and Postman, with which there is apparent disagreement. We find that for the power spectra we consider, the difference in window functions between the two samples used for the measurements results in a low level of expected correlation between the estimated bulk flows. We calculate a $\chi^2$ for the two measurements taken together and find that their lack of agreement tends to disfavor spectra with excessive power on large scales, but not at a level sufficient to rule them out. A sample consisting of other SN type Ia's found in the Asiago catalog is used to study how the sensitivity of the method used ...

  15. Large-scale experimental design for decentralized SLAM

    Science.gov (United States)

    Cunningham, Alex; Dellaert, Frank

    2012-06-01

    This paper presents an analysis of large scale decentralized SLAM under a variety of experimental conditions to illustrate design trade-offs relevant to multi-robot mapping in challenging environments. As a part of work through the MAST CTA, the focus of these robot teams is on the use of small-scale robots with limited sensing, communication and computational resources. To evaluate mapping algorithms with large numbers (50+) of robots, we developed a simulation incorporating sensing of unlabeled landmarks, line-of-sight blocking obstacles, and communication modeling. Scenarios are randomly generated with variable models for sensing, communication, and robot behavior. The underlying Decentralized Data Fusion (DDF) algorithm in these experiments enables robots to construct a map of their surroundings by fusing local sensor measurements with condensed map information from neighboring robots. Each robot maintains a cache of previously collected condensed maps from neighboring robots, and actively distributes these maps throughout the network to ensure resilience to communication and node failures. We bound the size of the robot neighborhoods to control the growth of the size of neighborhood maps. We present the results of experiments conducted in these simulated scenarios under varying measurement models and conditions while measuring mapping performance. We discuss the trade-offs between mapping performance and scenario design, including robot teams separating and joining, multi-robot data association, exploration bounding, and neighborhood sizes.

  16. Parallel cluster labeling for large-scale Monte Carlo simulations

    CERN Document Server

    Flanigan, M; Flanigan, M; Tamayo, P

    1995-01-01

    We present an optimized version of a cluster labeling algorithm previously introduced by the authors. This algorithm is well suited for large-scale Monte Carlo simulations of spin models using cluster dynamics on parallel computers with large numbers of processors. The algorithm divides physical space into rectangular cells which are assigned to processors and combines a serial local labeling procedure with a relaxation process across nearest-neighbor processors. By controlling overhead and reducing inter-processor communication this method attains good computational speed-up and efficiency. Large systems of up to 65536 X 65536 spins have been simulated at updating speeds of 11 nanosecs/site (90.7 million spin updates/sec) using state-of-the-art supercomputers. In the second part of the article we use the cluster algorithm to study the relaxation of magnetization and energy on large Ising models using Swendsen-Wang dynamics. We found evidence that exponential and power law factors are present in the relaxatio...
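
    The serial "local labeling" step of such cluster algorithms can be sketched with a union-find pass over a lattice, as below; the parallel relaxation across processor cells, which is the contribution of the paper, is omitted, and the occupation pattern is random rather than generated by Swendsen-Wang dynamics.

```python
# Serial sketch of the "local labeling" step: union-find cluster labeling of
# occupied sites on a 2D lattice. The occupation pattern is random here, and
# the paper's parallel relaxation across processor cells is not shown.
import numpy as np

def label_clusters(occupied):
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]       # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    rows, cols = occupied.shape
    for i in range(rows):
        for j in range(cols):
            if not occupied[i, j]:
                continue
            parent.setdefault((i, j), (i, j))
            for di, dj in ((-1, 0), (0, -1)):   # already-visited neighbours
                ni, nj = i + di, j + dj
                if ni >= 0 and nj >= 0 and occupied[ni, nj]:
                    union((i, j), (ni, nj))
    return {site: find(site) for site in parent}

occupied = np.random.default_rng(7).random((64, 64)) < 0.5
labels = label_clusters(occupied)
print("occupied sites:", len(labels), "clusters:", len(set(labels.values())))
```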

  17. Analyzing large-scale proteomics projects with latent semantic indexing.

    Science.gov (United States)

    Klie, Sebastian; Martens, Lennart; Vizcaíno, Juan Antonio; Côté, Richard; Jones, Phil; Apweiler, Rolf; Hinneburg, Alexander; Hermjakob, Henning

    2008-01-01

    Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on this data, leveling off the ultimate value of these projects far below their potential. A prominent reason published proteomics data is seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply a latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing proteomics project planning as well as the choice of technologies used in future experiments. It is clear from these results that the analysis of large bodies of publicly available proteomics data by noise-tolerant algorithms such as the latent semantic analysis holds great promise and is currently underexploited.
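
    A generic latent semantic analysis step of the kind described above can be sketched as a truncated SVD of a protein-by-experiment identification matrix; the matrix below is a random placeholder rather than HUPO PPP data, and the number of latent components is arbitrary.

```python
# Generic latent-semantic-analysis sketch: truncated SVD of a protein-by-
# experiment identification matrix. The counts are random placeholders, not
# HUPO PPP data, and five latent components is an arbitrary choice.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(4)
counts = rng.poisson(lam=0.3, size=(500, 40))  # proteins x experiments

svd = TruncatedSVD(n_components=5, random_state=0)
protein_factors = svd.fit_transform(counts)    # proteins in latent space
experiment_factors = svd.components_.T         # experiments in latent space

print("explained variance ratios:", np.round(svd.explained_variance_ratio_, 3))
```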

  18. Testing Gravity Using Large-Scale Redshift-Space Distortions

    CERN Document Server

    Raccanelli, Alvise; Pietrobon, Davide; Schmidt, Fabian; Samushia, Lado; Bartolo, Nicola; Doré, Olivier; Matarrese, Sabino; Percival, Will J

    2012-01-01

    We use Luminous Red Galaxies from the Sloan Digital Sky Survey II to test the cosmological structure growth in two alternatives to the standard LCDM+GR cosmological model. We compare observed three-dimensional clustering in SDSS DR7 with theoretical predictions for the standard vanilla LCDM+GR model, Unified Dark Matter cosmologies and the normal branch DGP. In computing the expected correlations in UDM cosmologies, we derive a parameterized formula for the growth factor in these models. For our analysis we apply the methodology tested in Raccanelli et al. 2010 and use the measurements of Samushia et al. 2011, that account for survey geometry, non-linear and wide-angle effects and the distribution of pair orientation. We show that the estimate of the growth rate is potentially degenerate with wide-angle effects, meaning that extremely accurate measurements of the growth rate on large scales will need to take such effects into account. We use measurements of the zeroth and second order moments of the correlati...

  19. Ectopically tethered CP190 induces large-scale chromatin decondensation

    Science.gov (United States)

    Ahanger, Sajad H.; Günther, Katharina; Weth, Oliver; Bartkuhn, Marek; Bhonde, Ramesh R.; Shouche, Yogesh S.; Renkawitz, Rainer

    2014-01-01

    Insulator mediated alteration in higher-order chromatin and/or nucleosome organization is an important aspect of epigenetic gene regulation. Recent studies have suggested a key role for CP190 in such processes. In this study, we analysed the effects of ectopically tethered insulator factors on chromatin structure and found that CP190 induces large-scale decondensation when targeted to a condensed lacO array in mammalian and Drosophila cells. In contrast, dCTCF alone is unable to cause such a decondensation; however, when CP190 is present, dCTCF recruits it to the lacO array and mediates chromatin unfolding. The CP190 induced opening of chromatin may not be correlated with transcriptional activation, as binding of CP190 does not enhance luciferase activity in reporter assays. We propose that CP190 may mediate histone modification and chromatin remodelling activity to induce an open chromatin state by its direct recruitment or targeting by a DNA binding factor such as dCTCF.

  20. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Richard Dimenna, R; Robert Leishear, R; David Stefanko, D

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.

  1. Fast Large Scale Structure Perturbation Theory using 1D FFTs

    CERN Document Server

    Schmittfull, Marcel; McDonald, Patrick

    2016-01-01

    The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional Fast Fourier Transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point...
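
    The core numerical primitive behind such methods is the radial transform between the power spectrum P(k) and the 2-point correlation ξ(r). The sketch below evaluates it by brute-force quadrature for a toy power spectrum; FFT-based approaches like the one described compute this kind of transform, and the 1-loop terms built from it, in O(N log N) rather than looping over r values.

```python
# Brute-force evaluation of the radial transform
#   xi(r) = 1 / (2 pi^2 r) * Integral dk k P(k) sin(k r)
# for a toy power spectrum. FFT-based methods such as the one described
# evaluate this kind of transform (and the 1-loop terms built from it) in
# O(N log N) instead of summing over the k grid separately for each r.
import numpy as np

k = np.linspace(1e-3, 10.0, 20000)             # wavenumber grid (h/Mpc)
pk = 1e4 * k / (1.0 + (k / 0.02) ** 3)         # toy P(k), not a realistic one
dk = k[1] - k[0]

def xi(r):
    return np.sum(k * pk * np.sin(k * r)) * dk / (2.0 * np.pi ** 2 * r)

for r in (10.0, 50.0, 100.0):
    print(f"xi({r:.0f}) = {xi(r):.4e}")
```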

  2. Observational constraints on Modified Chaplygin Gas from Large Scale Structure

    CERN Document Server

    Paul, Bikash Chandra; Beesham, Aroonkumar

    2014-01-01

    We study cosmological models with modified Chaplygin gas (in short, MCG) to determine observational constraints on its EoS parameters. The observational data of the background and the growth tests are employed. The background test data, namely H(z)-z data, the CMB shift parameter, the Baryonic acoustic oscillation (BAO) peak parameter and SN Ia data, are considered to study the dynamical aspects of the universe. The growth test data we employ here consist of the linear growth function for the large-scale structures of the universe; models are explored assuming MCG as a candidate for dark energy. Considering the observational growth data for a given range of redshift from the Wiggle-Z measurements and rms mass fluctuations from Ly-$\alpha$ measurements, cosmological models are analyzed numerically to determine constraints on the MCG parameters. In this case, the Wang-Steinhardt ansatz for the growth index $\gamma$ and growth function $f$ (defined as $f=\Omega_{m}^{\gamma} (a)$) is also taken into account for the numeri...
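
    The growth-function ansatz quoted above, f = Ω_m(a)^γ, can be evaluated directly; the sketch below does so for a flat ΛCDM-like background with illustrative parameter values, and does not model the MCG equation of state itself.

```python
# Direct evaluation of the growth-rate ansatz f(a) = Omega_m(a)**gamma for a
# flat LambdaCDM-like background. Omega_m0 = 0.3 and gamma = 0.55 are
# illustrative values; the MCG equation of state is not modelled here.
import numpy as np

def omega_m(a, omega_m0=0.3):
    """Matter density parameter at scale factor a in a flat universe."""
    return omega_m0 * a**-3 / (omega_m0 * a**-3 + (1.0 - omega_m0))

def growth_rate(a, gamma=0.55, omega_m0=0.3):
    """Wang-Steinhardt ansatz f = Omega_m(a)**gamma."""
    return omega_m(a, omega_m0) ** gamma

for z in (0.0, 0.5, 1.0, 2.0):
    a = 1.0 / (1.0 + z)
    print(f"z = {z:.1f}  f = {growth_rate(a):.3f}")
```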

  3. Intensive agriculture erodes β-diversity at large scales.

    Science.gov (United States)

    Karp, Daniel S; Rominger, Andrew J; Zook, Jim; Ranganathan, Jai; Ehrlich, Paul R; Daily, Gretchen C

    2012-09-01

    Biodiversity is declining from unprecedented land conversions that replace diverse, low-intensity agriculture with vast expanses under homogeneous, intensive production. Despite documented losses of species richness, consequences for β-diversity, changes in community composition between sites, are largely unknown, especially in the tropics. Using a 10-year data set on Costa Rican birds, we find that low-intensity agriculture sustained β-diversity across large scales on a par with forest. In high-intensity agriculture, low local (α) diversity inflated β-diversity as a statistical artefact. Therefore, at small spatial scales, intensive agriculture appeared to retain β-diversity. Unlike in forest or low-intensity systems, however, high-intensity agriculture also homogenised vegetation structure over large distances, thereby decoupling the fundamental ecological pattern of bird communities changing with geographical distance. This ~40% decline in species turnover indicates a significant decline in β-diversity at large spatial scales. These findings point the way towards multi-functional agricultural systems that maintain agricultural productivity while simultaneously conserving biodiversity.
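
    One common way to quantify β-diversity, Whittaker's β = γ/ᾱ (regional richness divided by mean local richness), can be computed from a site-by-species presence matrix as below; the study's own β-diversity metrics may differ, and the matrix is a made-up example.

```python
# Whittaker's beta-diversity (beta = gamma / mean alpha) from a site-by-
# species presence matrix. The matrix is a made-up example; the study's own
# beta-diversity metrics may differ.
import numpy as np

sites = np.array([                             # rows = sites, cols = species
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
])

alpha = sites.sum(axis=1).mean()               # mean local species richness
gamma = int((sites.sum(axis=0) > 0).sum())     # regional species richness
beta = gamma / alpha
print(f"alpha = {alpha:.2f}, gamma = {gamma}, beta = {beta:.2f}")
```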

  4. Large Scale Synthesis of Carbon Nanofibres on Sodium Chloride Support

    Directory of Open Access Journals (Sweden)

    Ravindra Rajarao

    2012-06-01

    Full Text Available Large scale synthesis of carbon nanofibres (CNFs) on a sodium chloride support has been achieved. CNFs have been synthesized using metal oxalates (Ni, Co and Fe) as catalyst precursors at 680 °C by the chemical vapour deposition method. Upon pyrolysis, these catalyst precursors yield catalyst nanoparticles directly. Sodium chloride was used as the catalyst support; it was chosen because of its non-toxic and water-soluble nature. Problems, such as the detrimental effect of CNFs, the detrimental effects on the environment and even cost, have been avoided by using a water-soluble support. The structure of the products was characterized by scanning electron microscopy, transmission electron microscopy and Raman spectroscopy. The purity of the grown and purified products was determined by thermal analysis and the X-ray diffraction method. Here we report 7600, 7000 and 6500 wt% yields of CNFs synthesized over nickel, cobalt and iron oxalate. Long, curved and worm-shaped CNFs were obtained on the Ni, Co and Fe catalysts respectively. The lengthy process of calcination and reduction for the preparation of catalysts is avoided in this method. This synthesis route is simple and economical; hence, it can be used for CNF synthesis in industries.

  5. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui

    2013-01-01

    We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect the visual plausibility. Visual plausibility of scenarios simulated with these hypotheses assumed true are measured using subjective rating from viewers. As expected, analysis of results supports the truthfulness of the hypotheses under certain simulation environments. However, our analysis discovered four factors which may affect the authenticity of these hypotheses: number of collisions simulated simultaneously, homogeneity of colliding object pairs, distance from scene under simulation to camera position, and simulation method used. We also try to find an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.

  6. High-throughput solution processing of large-scale graphene

    Science.gov (United States)

    Tung, Vincent C.; Allen, Matthew J.; Yang, Yang; Kaner, Richard B.

    2009-01-01

    The electronic properties of graphene, such as high charge carrier concentrations and mobilities, make it a promising candidate for next-generation nanoelectronic devices. In particular, electrons and holes can undergo ballistic transport on the sub-micrometre scale in graphene and do not suffer from the scale limitations of current MOSFET technologies. However, it is still difficult to produce single-layer samples of graphene and bulk processing has not yet been achieved, despite strenuous efforts to develop a scalable production method. Here, we report a versatile solution-based process for the large-scale production of single-layer chemically converted graphene over the entire area of a silicon/SiO2 wafer. By dispersing graphite oxide paper in pure hydrazine we were able to remove oxygen functionalities and restore the planar geometry of the single sheets. The chemically converted graphene sheets that were produced have the largest area reported to date (up to 20 × 40 µm), making them far easier to process. Field-effect devices have been fabricated by conventional photolithography, displaying currents that are three orders of magnitude higher than previously reported for chemically produced graphene. The size of these sheets enables a wide range of characterization techniques, including optical microscopy, scanning electron microscopy and atomic force microscopy, to be performed on the same specimen.

  7. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
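
    The spectral core shared by the techniques the framework parallelizes can be illustrated with a short serial sketch; the function name spectral_embed, the use of NumPy and the classical-MDS formulation are our illustrative assumptions, not the paper's implementation:

      # Serial sketch of a spectral dimensionality reduction step: build a
      # double-centred Gram matrix from pairwise squared distances, then keep
      # the leading eigenvectors as low-dimensional coordinates. The parallel
      # framework distributes exactly these stages (distances, centring,
      # eigensolve) across many cores; none of that machinery is shown here.
      import numpy as np

      def spectral_embed(X, n_components=2):
          """Embed the rows of X (n_points x n_dims) into n_components dimensions."""
          sq_norms = np.sum(X ** 2, axis=1)
          sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
          n = X.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n            # centring matrix
          K = -0.5 * J @ sq_dists @ J                    # double-centred Gram matrix
          eigvals, eigvecs = np.linalg.eigh(K)           # ascending eigenvalues
          top = np.argsort(eigvals)[::-1][:n_components]
          return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

      # Example: 1,000 points in 50 dimensions reduced to 2.
      coords = spectral_embed(np.random.rand(1000, 50), n_components=2)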

  8. Large-scale filaments-newtonian vs. modified dynamics

    CERN Document Server

    Milgrom, M

    1996-01-01

    Eisenstein, Loeb and Turner (ELT) have recently proposed a method for estimating the dynamical masses of large-scale filaments, whereby the filament is modeled as an axisymmetric, isothermal cylinder, for which ELT derive a global relation between the (constant) velocity dispersion and the total line density. We first show that the model assumptions of ELT can be relaxed materially: an exact relation between the velocity and line density is derived for any cylinder (not necessarily axisymmetric), with an arbitrary constituent distribution function (so isothermality need not be assumed). We then consider the same problem in the context of the modified dynamics (MOND). After a brief comparison between scaling properties in the two theories, we study idealized MOND model filaments. A preliminary application to the segment of the Perseus-Pisces filament treated by ELT gives MOND M/L estimates of order 10 s.u., compared with the Newtonian value of about 450 that ELT find. In spite of the large uncertainties stil...
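
    For orientation, the Newtonian relation underlying the ELT estimate is the standard result for a self-gravitating isothermal cylinder, written here in our own notation (the paper's exact, generalized relation for non-isothermal, non-axisymmetric cylinders is not reproduced):

      \mu = \frac{2\,\sigma^{2}}{G}

    where \mu is the total line density (mass per unit length) and \sigma the constant one-dimensional velocity dispersion; dividing \mu by the observed luminosity per unit length then yields the quoted M/L values.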

  9. Weighted social networks for a large scale artificial society

    Science.gov (United States)

    Fan, Zong Chen; Duan, Wei; Zhang, Peng; Qiu, Xiao Gang

    2016-12-01

    The method of artificial society has provided a powerful way to study and explain how individual behaviors at the micro level give rise to the emergence of global social phenomena. It also creates the need for an appropriate representation of social structure, which usually has a significant influence on human behaviors. It is widely acknowledged that social networks are the main paradigm for describing social structure and reflecting social relationships within a population. To generate social networks for a population of interest, considering physical distance and social distance among people, we propose a generation model of social networks for a large-scale artificial society based on human choice behavior theory under the principle of random utility maximization. As a premise, we first build an artificial society by constructing a synthetic population with a series of attributes in line with the statistical (census) data for Beijing. The generation model is then applied to assign social relationships to each individual in the synthetic population. Compared with previous empirical findings, the results show that our model can reproduce the general characteristics of social networks, such as a high clustering coefficient, significant community structure and the small-world property. Our model can also be extended to serve as the initial input of a larger social micro-simulation. It will facilitate research into, and prediction of, social phenomena and issues such as epidemic transmission and rumor spreading.
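
    As an illustration of the choice principle only, the following sketch assigns ties by random utility maximization, with utility falling in physical and social distance plus Gumbel noise (equivalent to multinomial-logit choice). The coefficients, the fixed out-degree k and the attribute construction are illustrative assumptions, not parameters from the paper:

      import numpy as np

      rng = np.random.default_rng(0)

      def generate_ties(phys_dist, soc_dist, k=5, alpha=1.0, beta=1.0):
          """Return a symmetric adjacency matrix given two n x n distance matrices."""
          n = phys_dist.shape[0]
          adj = np.zeros((n, n), dtype=bool)
          for i in range(n):
              utility = -alpha * phys_dist[i] - beta * soc_dist[i]   # deterministic part
              utility[i] = -np.inf                                   # no self-ties
              utility = utility + rng.gumbel(size=n)                 # random utility term
              chosen = np.argsort(utility)[-k:]                      # maximise utility: top-k alternatives
              adj[i, chosen] = adj[chosen, i] = True                 # record undirected ties
          return adj

      # Toy synthetic population: positions give physical distance, categorical
      # attributes (e.g. age band, occupation) give a simple social distance.
      n = 200
      xy = rng.random((n, 2))
      phys = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
      attrs = rng.integers(0, 5, size=(n, 3))
      soc = (attrs[:, None, :] != attrs[None, :, :]).sum(axis=-1)
      A = generate_ties(phys, soc)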

  10. Measuring and correcting wobble in large-scale transmission radiography

    CERN Document Server

    Rogers, Thomas W; Morton, Edward J; Griffin, Lewis D

    2016-01-01

    Large-scale transmission radiography scanners are used to image vehicles and cargo containers. Acquired images are inspected for threats by a human operator or a computer algorithm. To make accurate detections, it is important that image values are precise. However, due to the scale of such systems, they can be mechanically unstable, causing the imaging array to wobble during a scan. This leads to an effective loss of precision in the captured image. We consider the measurement of wobble and amelioration of the consequent loss of image precision. Following our previous work, we use Beam Position Detectors (BPDs) to measure the cross-sectional profile of the X-ray beam, allowing for estimation, and thus correction, of wobble. We propose: (i) a model of image formation with a wobbling detector array; (ii) a method of wobble correction derived from this model; (iii) methods for calibrating sensor sensitivities and relative offsets; (iv) a Random Regression Forest based method for instantaneous estimation of detec...
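
    A minimal sketch of the regression step alone, assuming scikit-learn and purely synthetic calibration data (the paper's feature construction, units and targets are not reproduced): a random forest is trained to map a BPD beam profile to the instantaneous detector displacement, giving one wobble estimate per image column.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)

      # Calibration scan: one BPD profile per image column, with the wobble
      # displacement known from an independent measurement (synthetic here).
      profiles_train = rng.random((5000, 32))          # 32 BPD samples per profile
      wobble_train = rng.normal(0.0, 1.0, size=5000)   # displacement, arbitrary units

      forest = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
      forest.fit(profiles_train, wobble_train)

      # During a real scan, estimate the wobble column by column; the estimates
      # can then drive a per-column correction of the radiograph.
      profiles_scan = rng.random((1200, 32))
      wobble_estimates = forest.predict(profiles_scan)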

  11. Algorithm of simulation time synchronization over large-scale nodes

    Institute of Scientific and Technical Information of China (English)

    ZHAO QinPing; ZHOU Zhong; Lü Fang

    2008-01-01

    In distributed simulation there is no uniform physical clock, and delay cannot be estimated because of jitter, so simulation time synchronization is essential for event consistency among nodes. This paper investigates time synchronization algorithms over large-scale distributed nodes, analyzes the LBTS (lower bound time stamp) computation model described in the IEEE HLA standard, and then presents a grouped LBTS model. Existing algorithms carry a default premise that control packets must be delivered via reliable transport. A theorem on the reliability of time synchronization messages is proposed, which proves that only those control messages that constrain time advance need reliable delivery; this removes the default reliability premise. Multicast is then introduced for the transmission of control messages, and the algorithm MCTS (multi-node coordination time synchronization) is proposed on this basis. MCTS not only improves time-advance efficiency but also reduces the network bandwidth occupied. Experimental results demonstrate that the algorithm outperforms others in both time-advance speed and occupied network bandwidth: its time-advance rate is about 50 advances per second with 1000 nodes, approximately equal to that of similar systems with 100 nodes.
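
    The basic (ungrouped) LBTS computation can be sketched in a few lines; the function names and the dictionary-based bookkeeping below are our illustrative assumptions, and the grouped model and multicast transport described in the record are not shown:

      def lbts(reported_times, lookaheads, me):
          """Lower bound time stamp for node `me`: the minimum, over all other
          nodes, of their reported simulation time plus their lookahead."""
          return min(reported_times[n] + lookaheads[n]
                     for n in reported_times if n != me)

      def grant_advance(requested_time, reported_times, lookaheads, me):
          """A node may only advance to the smaller of its request and its LBTS."""
          return min(requested_time, lbts(reported_times, lookaheads, me))

      # Example with three nodes: A asks to advance to t=15 but is held back
      # by B, whose reported time plus lookahead is 12.0 + 0.5 = 12.5.
      times = {"A": 10.0, "B": 12.0, "C": 11.0}
      las = {"A": 1.0, "B": 0.5, "C": 2.0}
      print(grant_advance(15.0, times, las, "A"))   # -> 12.5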

  12. Large scale structures in liquid crystal/clay colloids

    Energy Technology Data Exchange (ETDEWEB)

    Duijneveldt, Jeroen S van [School of Chemistry, Cantock' s Close, University of Bristol, Bristol BS8 1TS (United Kingdom); Klein, Susanne [HP Laboratories, Filton Road, Stoke Gifford, Bristol BS34 8QZ (United Kingdom); Leach, Edward [HP Laboratories, Filton Road, Stoke Gifford, Bristol BS34 8QZ (United Kingdom); Pizzey, Claire [School of Chemistry, Cantock' s Close, University of Bristol, Bristol BS8 1TS (United Kingdom); Richardson, Robert M [H H Wills Physics Laboratory, Tyndall Avenue, University of Bristol, Bristol BS8 1TL (United Kingdom)

    2005-04-20

    Suspensions of three different clays in K15, a thermotropic liquid crystal, have been studied by optical microscopy and small angle x-ray scattering. The three clays were claytone AF, a surface treated natural montmorillonite, laponite RD, a synthetic hectorite, and mined sepiolite. The claytone and laponite were sterically stabilized whereas sepiolite formed a relatively stable suspension in K15 without any surface treatment. Micrographs of the different suspensions revealed that all three suspensions contained large scale structures. The nature of these aggregates was investigated using small angle x-ray scattering. For the clays with sheet-like particles, claytone and laponite, the flocs contain a mixture of stacked and single platelets. The basal spacing in the stacks was independent of particle concentration in the suspension and the phase of the solvent. The number of platelets in the stack and their percentage in the suspension varied with concentration and the aspect ratio of the platelets. The lath shaped sepiolite did not show any tendency to organize into ordered structures. Here the aggregates are networks of randomly oriented single rods.

  13. Plasma suppression of large scale structure formation in the universe.

    Science.gov (United States)

    Chen, Pisin; Lai, Kwang-Chang

    2007-12-07

    We point out that during the reionization epoch of the cosmic history, the plasma collective effect among the ordinary matter would suppress the large-scale structure formation. The imperfect Debye shielding at finite temperature would induce an electrostatic pressure which, working together with the thermal pressure, would counter the gravitational collapse. As a result, the effective Jeans length λ̄_J is increased by a factor λ̄_J/λ_J = √(8/5) relative to the conventional one. For scales smaller than the effective Jeans scale the plasma would oscillate at the ion-acoustic frequency. The modes that would be influenced by this effect lie roughly in the range 0.5h Mpc⁻¹
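
    For reference, and in our own notation rather than the paper's, the conventional Jeans length scales linearly with the sound speed, so the quoted √(8/5) increase in the Jeans length is equivalent to the electrostatic term raising the effective squared sound speed by a factor of 8/5:

      \lambda_{J} = c_{s}\sqrt{\frac{\pi}{G\rho}}, \qquad \bar{\lambda}_{J} = \sqrt{\tfrac{8}{5}}\,\lambda_{J} \;\Longleftrightarrow\; c_{\mathrm{eff}}^{2} = \tfrac{8}{5}\,c_{s}^{2}.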

  14. Bio-inspired wooden actuators for large scale applications.

    Directory of Open Access Journals (Sweden)

    Markus Rüggeberg

    Full Text Available Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular, the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small-scale applications down to micrometer size have been developed, but up-scaling remains challenging due to limitations either in the mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and good machinability to reach large-scale actuation and application. The amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and constitution of the bilayers. Field tests under full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for the autonomous and solar-powered movement of a tracker for solar modules.

  15. Large scale molecular dynamics study of polymer-surfactant complex

    Science.gov (United States)

    Goswami, Monojoy; Sumpter, Bobby

    2012-02-01

    In this work, we study the self-assembly of cationic polyelectrolytes mediated by anionic surfactants in dilute or semi-dilute and gel states; understanding the dilute system is a prerequisite for understanding the gel states. The importance of polyelectrolytes interacting with oppositely charged colloidal particles can be seen in biological systems, such as the immobilization of enzymes in polyelectrolyte complexes or the nonspecific association of DNA with protein. Likewise, the interaction of surfactants with polyelectrolytes shows intriguing phenomena that are important both in academic research and in industrial applications. Many useful properties of PE-surfactant complexes come from the highly ordered structures of surfactant self-assembly inside the PE aggregate. We perform large-scale molecular dynamics simulations using LAMMPS to understand the structure and dynamics of PE-surfactant systems. Our investigation shows highly ordered ring-string structures that have been observed experimentally in biological systems. We will investigate many different properties of PE-surfactant complexation that will be helpful for pharmaceutical, engineering and biological applications.

  16. Large Scale Structure in the Epoch of Reionization

    Science.gov (United States)

    Koekemoer, Anton; Mould, Jeremy; Cooke, Jeffrey; Wyithe, Stuart; Lidman, Christopher; Trenti, Michele; Abbott, Tim; Kunder, Andrea; Barone-Nugent, Robert; Tescari, Edoardo; Katsianis, Antonios

    2014-02-01

    We propose to capitalize on the high red sensitivity and large field of view of DECam to detect the brightest and rarest galaxies at z=6-7. Our 2012 results show the signature of large-scale structure with a wavenumber of order 0.1 inverse Mpc, in line with expectations of primordial non-gaussianity, but the signal to noise in one deep field from two nights' data is insufficient for a robust conclusion; ten nights' data will do the job. These data will also constrain the galaxy contribution to reionization by enabling a tighter constraint on the full galaxy luminosity function, including the faint end. The observations will be executed with a cadence and depth that will enable the detection of super-luminous supernovae at z=6-7. Super-luminous supernovae are a recently observed class of supernovae that are 10-100x more luminous than typical supernovae; this class includes pair-instability supernovae, a rare third type of supernova explosion of which only 3 events are known. The proposed observations will greatly extend the current reach of supernova research, examining their occurrence rate and properties near the epoch of reionization.

  17. Scalable Resource Discovery Architecture for Large Scale MANETs

    Directory of Open Access Journals (Sweden)

    Saad Al-Ahmadi

    2014-02-01

    Full Text Available The study conducted a primary investigation into using the Gray cube structure, clustering and Distributed Hash Tables (DHTs) to build an efficient virtual network backbone for Resource Discovery (RD) tasks in large-scale Mobile Ad hoc NETworks (MANETs). A MANET is an autonomous system of mobile nodes characterized by wireless links. One of the major challenges in MANETs is the RD protocol responsible for advertising and searching for network services. We propose an efficient and scalable RD architecture to meet the challenging requirements of a reliable, scalable and power-efficient RD protocol suitable for MANETs with potentially thousands of wireless mobile devices. Our RD is based on a virtual network backbone created by dividing the network into several non-overlapping localities using multi-hop clustering. In every locality we build a Gray cube with a locally adapted dimension. All the Gray cubes are connected through gateways and access points to form a virtual backbone used as a substrate for DHT operations to distribute, register and locate network resources efficiently. The Gray cube is characterized by a low network diameter, low average distance and strong connectivity. We evaluated the performance of the proposed RD and compared it to some of the well-known RD schemes in the literature by modeling and simulation. The results show the superiority of the proposed RD in terms of delay, load balancing, overload avoidance, scalability and fault tolerance.
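
    The hypercube-style labelling such a structure builds on can be sketched as follows; the reflected Gray code, single-bit-flip neighbourhood and greedy routing shown here are our illustrative reading of a "Gray cube", not the paper's full construction with clustering, gateways and the DHT layer:

      # Node i in a d-dimensional cube gets the reflected-Gray-code label
      # i ^ (i >> 1); two nodes are overlay neighbours when their labels differ
      # in exactly one bit, so greedy routing can correct one bit per hop.
      def gray(i):
          return i ^ (i >> 1)

      def neighbours(label, d):
          """Labels reachable in one hop from `label` in a d-dimensional cube."""
          return [label ^ (1 << bit) for bit in range(d)]

      def route(src, dst, d):
          """Greedy one-bit-per-hop route from src to dst (both cube labels)."""
          path, cur = [src], src
          while cur != dst:
              bit = (cur ^ dst).bit_length() - 1   # highest differing bit
              cur ^= 1 << bit
              path.append(cur)
          return path

      d = 3
      labels = [gray(i) for i in range(2 ** d)]   # [0, 1, 3, 2, 6, 7, 5, 4]
      print(neighbours(labels[0], d))             # -> [1, 2, 4]
      print(route(labels[0], labels[5], d))       # -> [0, 4, 6, 7]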

  18. Large-scale solvothermal synthesis of fluorescent carbon nanoparticles

    Science.gov (United States)

    Ku, Kahoe; Lee, Seung-Wook; Park, Jinwoo; Kim, Nayon; Chung, Haegeun; Han, Chi-Hwan; Kim, Woong

    2014-09-01

    The large-scale production of high-quality carbon nanomaterials is highly desirable for a variety of applications. We demonstrate a novel synthetic route to the production of fluorescent carbon nanoparticles (CNPs) in large quantities via a single-step reaction. The simple heating of a mixture of benzaldehyde, ethanol and graphite oxide (GO) with residual sulfuric acid in an autoclave produced 7 g of CNPs with a quantum yield of 20%. The CNPs can be dispersed in various organic solvents; hence, they are easily incorporated into polymer composites in forms such as nanofibers and thin films. Additionally, we observed that the GO present during the CNP synthesis was reduced. The reduced GO (RGO) was sufficiently conductive (σ ≈ 282 S m⁻¹) such that it could be used as an electrode material in a supercapacitor; in addition, it can provide excellent capacitive behavior and high-rate capability. This work will contribute greatly to the development of efficient synthetic routes to diverse carbon nanomaterials, including CNPs and RGO, that are suitable for a wide range of applications.

  19. 78 FR 7464 - Large Scale Networking (LSN) ; Joint Engineering Team (JET)

    Science.gov (United States)

    2013-02-01

    ... Large Scale Networking (LSN) ; Joint Engineering Team (JET) AGENCY: The Networking and Information... research networking and networking to support science applications. The JET reports to the Large Scale Networking (LSN) Coordinating Group (CG). Public Comments: The government seeks individual input;...

  20. 77 FR 58416 - Large Scale Networking (LSN); Middleware and Grid Interagency Coordination (MAGIC) Team

    Science.gov (United States)

    2012-09-20

    ... Large Scale Networking (LSN); Middleware and Grid Interagency Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO... to the Large Scale Networking (LSN) Coordinating Group (CG). Public Comments: The government...