Sample records for Canada involving large-scale

  1. Evidence for large-scale effects of competition: niche displacement in Canada lynx and bobcat. (United States)

    Peers, Michael J L; Thornton, Daniel H; Murray, Dennis L


    Determining the patterns, causes and consequences of character displacement is central to our understanding of competition in ecological communities. However, the majority of competition research has occurred over small spatial extents or focused on fine-scale differences in morphology or behaviour. The effects of competition on broad-scale distribution and niche characteristics of species remain poorly understood but critically important. Using range-wide species distribution models, we evaluated whether Canada lynx (Lynx canadensis) or bobcat (Lynx rufus) were displaced in regions of sympatry. Consistent with our prediction, we found that lynx niches were less similar to those of bobcat in areas of sympatry versus allopatry, with a stronger reliance on snow cover driving lynx niche divergence in the sympatric zone. By contrast, bobcat increased niche breadth in zones of sympatry, and bobcat niches were equally similar to those of lynx in zones of sympatry and allopatry. These findings suggest that competitively disadvantaged species avoid competition at large scales by restricting their niche to highly suitable conditions, while superior competitors expand the diversity of environments used. Our results indicate that competition can manifest within climatic niche space across species' ranges, highlighting the importance of biotic interactions occurring at large spatial scales on niche dynamics.
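As a sketch of how such range-wide niche comparisons are often quantified (the abstract does not name the study's metric; Schoener's D over normalized habitat-suitability surfaces from the distribution models is one common choice, shown here with toy values):

```python
# Hypothetical illustration: Schoener's D niche-overlap metric computed
# from two species' habitat-suitability values over the same grid cells.
# D = 1 means identical niches, D = 0 means completely disjoint ones.
def schoeners_d(suit1, suit2):
    """Niche overlap D = 1 - 0.5 * sum|p1 - p2| over grid cells."""
    p1 = [s / sum(suit1) for s in suit1]  # normalize to a probability surface
    p2 = [s / sum(suit2) for s in suit2]
    return 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(p1, p2))

# Identical suitability surfaces overlap completely (D = 1);
# disjoint ones do not overlap at all (D = 0).
print(schoeners_d([0.2, 0.8, 0.0], [0.2, 0.8, 0.0]))  # 1.0
print(schoeners_d([1.0, 0.0], [0.0, 1.0]))            # 0.0
```

Comparing D for lynx-vs-bobcat models fitted in sympatric versus allopatric zones is the kind of contrast the study's conclusion rests on.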

  2. Emergency preparedness in Canada: case studies on vulnerable populations in large-scale crises

    Energy Technology Data Exchange (ETDEWEB)

    Ng, S.Y.M.


    A 2007 study conducted by the Canadian Red Cross assessed the extent to which federal and provincial/territorial emergency management arrangements addressed the needs of high-risk populations. It found that current emergency management practices overlooked these populations and that further research was required to better understand their needs and specific disaster susceptibilities. In response to this study, and in order to help remedy this critical gap in emergency management, this report presented several case studies and recommendations, with a particular focus on high-risk groups, including ethnic, cultural and religious minorities. The report discussed the case studies in detail, presented lessons learned, and identified evaluative criteria for emergency management. Examples cited in the report included Toronto's emergency planning for vulnerable populations; the 1997 Red River flood in Manitoba; the great ice storm of 1998 in Kingston; the 2002 SARS crisis; and Hurricane Juan in 2003. For each of these examples, the report discussed the crisis, pre-disaster management, disaster response, vulnerable populations, the aftermath, and lessons learned. It was concluded that Canada needs to invest in research to understand the challenges faced by vulnerable populations in the event of a disaster.

  3. Effects of persistence and large-scale climate anomalies on trends and change points in extreme precipitation of Canada (United States)

    Tan, Xuezhi; Gan, Thian Yew; Shao, Dongguo


    Slowly varying changes (trends) and abrupt changes (change points) in annual maximum daily precipitation (AMP) and seasonal maximum daily precipitation (SMP) across Canada were analyzed for 223 stations in six regions over four periods (1900-2010, 1930-2010, 1950-2010 and 1970-2010). Variants of the Mann-Kendall (MK) test that account for the influence of short-term persistence (STP), long-term persistence (LTP) and large-scale climate anomalies were applied to detect trends, and the Pettitt test was used to identify change points. The results indicate a mix of increasing and decreasing trends in Canadian AMPs and SMPs. Most stations in the Pacific Maritime, central Boreal regions and the Atlantic Maritime showed an increase in AMP, while the Canadian Prairies and most Boreal regions showed a decrease. More stations showed statistically significant increases than decreases in spring, summer and autumn SMPs, while winter SMP decreased significantly over southern Canada and increased significantly over northern Canada. LTP significantly increased the likelihood of detecting trends in AMPs and SMPs. The effect of STP on trend detection was also evident, as shown by the differences between results from MK tests with and without the STP correction. The effects of large-scale climate anomalies on trends were significant for winter SMPs. More than a quarter of the stations showed statistically significant change points in AMPs and SMPs, occurring mostly around 1960-1990. More stations showed significant change points than trends, and winter showed more evident trends and change points in SMPs than the other three seasons. The trends and change points detected were field-significant.
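The classic Mann-Kendall statistic underlying these tests can be sketched in a few lines. This shows only the basic S statistic and normal-approximation Z score (no tie correction); the study's variants additionally adjust the variance for short- and long-term persistence:

```python
# Minimal Mann-Kendall trend test sketch (no persistence or tie corrections).
import math

def mann_kendall(x):
    """Return the MK statistic S and its normal-approximation Z score."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S for untied data
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing series attains the maximum S = n(n-1)/2
s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(s)  # 45
```

A |Z| above 1.96 would indicate a trend significant at the 5% level; persistence corrections inflate var_s and so weaken apparent significance, which is why the study's STP/LTP variants matter.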

  4. Solutions of large-scale electromagnetics problems involving dielectric objects with the parallel multilevel fast multipole algorithm. (United States)

    Ergül, Özgür


    Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.

  5. Non-stationary analysis of the frequency and intensity of heavy precipitation over Canada and their relations to large-scale climate patterns (United States)

    Tan, Xuezhi; Gan, Thian Yew


    Because the frequency and severity of floods have increased across Canada in recent years, it is important to understand the characteristics of Canadian heavy precipitation. Long-term precipitation data from 463 gauging stations across Canada were analyzed using non-stationary generalized extreme value (GEV), Poisson and generalized Pareto (GP) distributions. Time-varying covariates representing large-scale climate patterns, such as the El Niño Southern Oscillation (ENSO), North Atlantic Oscillation (NAO), Pacific Decadal Oscillation (PDO) and North Pacific Oscillation (NP), were incorporated into the parameters of the GEV, Poisson and GP distributions. Results show that GEV distributions tend to underestimate annual maximum daily precipitation (AMP) in the western and eastern coastal regions of Canada compared to GP distributions. Poisson regressions show that temporal clusters of heavy precipitation events in Canada are related to large-scale climate patterns. Modeling AMP time series with non-stationary GEV distributions and heavy precipitation with non-stationary GP distributions makes it evident that Canadian AMP and heavy precipitation show strong non-stationarities (abrupt and slowly varying changes), likely because of the influence of large-scale climate patterns. AMP in southwestern coastal regions, the southern Canadian Prairies and the Great Lakes tends to be higher in El Niño than in La Niña years, while AMP in other regions of Canada tends to be lower in El Niño than in La Niña years. The influence of ENSO on heavy precipitation was spatially consistent with, but stronger than, its influence on AMP. The effects of PDO, NAO and NP on extreme precipitation are also statistically significant at some stations across Canada.
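A minimal sketch of the non-stationary idea: let the GEV location parameter vary linearly with a climate covariate, here a hypothetical ENSO index. Parameter values are purely illustrative, not fitted to the station data in the study:

```python
# Toy non-stationary GEV: location mu(t) = mu0 + mu1 * enso(t).
# All parameter values below are illustrative assumptions.
import math

def gev_cdf(x, mu, sigma, xi):
    """GEV cumulative distribution (xi != 0 branch)."""
    t = 1.0 + xi * (x - mu) / sigma
    if t <= 0:
        # Outside the support: below the lower bound for xi > 0,
        # above the upper bound for xi < 0.
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def exceedance_prob(x, enso, mu0=40.0, mu1=3.0, sigma=10.0, xi=0.1):
    """P(AMP > x) in a year with the given ENSO index value."""
    return 1.0 - gev_cdf(x, mu0 + mu1 * enso, sigma, xi)

# With mu1 > 0, an El Nino year (enso = +1) raises the chance of
# exceeding 80 mm relative to a La Nina year (enso = -1).
p_nino = exceedance_prob(80.0, +1.0)
p_nina = exceedance_prob(80.0, -1.0)
print(p_nino > p_nina)  # True
```

Fitting mu0, mu1, sigma, xi by maximum likelihood, and testing whether mu1 differs from zero, is how covariate effects such as ENSO are typically judged significant in this kind of analysis.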

  6. Involvement of herbal medicine as a cause of mesenteric phlebosclerosis: results from a large-scale nationwide survey. (United States)

    Shimizu, Seiji; Kobayashi, Taku; Tomioka, Hideo; Ohtsu, Kensei; Matsui, Toshiyuki; Hibi, Toshifumi


    Mesenteric phlebosclerosis (MP) is a rare disease characterized by venous calcification extending from the colonic wall to the mesentery, with chronic ischemic changes in the intestine from impaired venous return. It is an idiopathic disease, but increasing attention has been paid to the potential involvement of herbal medicine, or Kampo, in its etiology. Until now there have been only scattered case reports, and no large-scale studies had been conducted to unravel the clinical characteristics and etiology of the disease. A nationwide survey was conducted using questionnaires to assess possible etiology (particularly the involvement of herbal medicine), clinical manifestations, disease course, and treatment of MP. Data from 222 patients were collected. Among the 169 patients (76.1%) whose history of herbal medicine use was obtained, 147 (87.0%) used herbal medicines. The use of herbal medicines containing sanshishi (gardenia fruit, Gardenia jasminoides Ellis) was reported in 119 of the 147 patients (81.0%). The use of herbal medicine containing sanshishi was therefore confirmed in 70.4% of the 169 patients whose herbal medicine history was obtained. The duration of sanshishi use ranged from 3 to 51 years (mean 13.6 years). Patients who discontinued sanshishi showed a better outcome than those who continued it. The use of herbal medicine containing sanshishi is associated with the etiology of MP. Although it may not be the causative factor, gastroenterologists need to be aware of the potential risk of herbal medicine containing sanshishi for the development of MP.

  7. Large-Scale Deletions and SMADIP1 Truncating Mutations in Syndromic Hirschsprung Disease with Involvement of Midline Structures (United States)

    Amiel, Jeanne; Espinosa-Parrilla, Yolanda; Steffann, Julie; Gosset, Philippe; Pelet, Anna; Prieur, Marguerite; Boute, Odile; Choiset, Agnès; Lacombe, Didier; Philip, Nicole; Le Merrer, Martine; Tanaka, Hajime; Till, Marianne; Touraine, Renaud; Toutain, Annick; Vekemans, Michel; Munnich, Arnold; Lyonnet, Stanislas


    Hirschsprung disease (HSCR) is a common malformation of neural-crest-derived enteric neurons that is frequently associated with other congenital abnormalities. The SMADIP1 gene has recently been recognized as disease-causing in some patients with a 2q22 chromosomal rearrangement, resulting in syndromic HSCR with mental retardation, microcephaly, and facial dysmorphism. We screened 19 patients with HSCR and mental retardation and identified large-scale SMADIP1 deletions or truncating mutations in 8 of the 19 patients. These results allow further delineation of the spectrum of malformations ascribed to SMADIP1 haploinsufficiency, which includes frequent features such as hypospadias and agenesis of the corpus callosum. Thus SMADIP1, which encodes a transcriptional corepressor of Smad target genes, may play a role not only in the patterning of neural-crest-derived cells and of the CNS but also in the development of midline structures in humans. PMID:11595972

  8. Evaluating HapMap SNP data transferability in a large-scale genotyping project involving 175 cancer-associated genes. (United States)

    Ribas, Gloria; González-Neira, Anna; Salas, Antonio; Milne, Roger L; Vega, Ana; Carracedo, Begoña; González, Emilio; Barroso, Eva; Fernández, Lara P; Yankilevich, Patricio; Robledo, Mercedes; Carracedo, Angel; Benítez, Javier


    One of the many potential uses of the HapMap project is its application to the investigation of complex disease aetiology among a wide range of populations. This study aims to assess the transferability of HapMap SNP data to the Spanish population in the context of cancer research. We carried out a genotyping study in Spanish subjects involving 175 candidate cancer genes using an indirect gene-based approach and compared the results with those for HapMap CEU subjects. Allele frequencies were very consistent between the two samples, with a high positive correlation (R) of 0.91. The transferability of tagSNPs selected from HapMap CEU data using pairwise r² thresholds of 0.8 and 0.5 was assessed by applying these to the Spanish and current HapMap data for 66 genes. In general, the HapMap tagSNPs performed very well. Our results show generally high concordance with HapMap data in allele frequencies and haplotype distributions and confirm the applicability of HapMap SNP data to the study of complex diseases in the Spanish population.
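The pairwise r^2 measure behind the tagSNP thresholds (0.8 and 0.5 in the study) can be computed directly from phased two-SNP haplotype counts; the counts below are illustrative, not from the study:

```python
# Pairwise linkage-disequilibrium r^2 between two biallelic SNPs,
# computed from phased haplotype counts (illustrative values).
def r_squared(n_ab, n_aB, n_Ab, n_AB):
    """r^2 from counts of the four two-SNP haplotypes ab, aB, Ab, AB."""
    n = n_ab + n_aB + n_Ab + n_AB
    p_a = (n_ab + n_aB) / n      # frequency of allele a at SNP 1
    p_b = (n_ab + n_Ab) / n      # frequency of allele b at SNP 2
    d = n_ab / n - p_a * p_b     # disequilibrium coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfectly correlated SNPs: every a-haplotype carries b.
print(r_squared(50, 0, 0, 50))  # 1.0
```

A tagSNP chosen at r^2 >= 0.8 in HapMap CEU "transfers" to another population if the same pair still exceeds the threshold there, which is what the study checks for the Spanish sample.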

  9. Transcriptomic and proteomic responses of Serratia marcescens to spaceflight conditions involve large-scale changes in metabolic pathways (United States)

    Wang, Yajuan; Yuan, Yanting; Liu, Jinwen; Su, Longxiang; Chang, De; Guo, Yinghua; Chen, Zhenhong; Fang, Xiangqun; Wang, Junfeng; Li, Tianzhi; Zhou, Lisha; Fang, Chengxiang; Yang, Ruifu; Liu, Changting


    The microgravity environment of spaceflight expeditions has been associated with altered microbial responses. This study characterizes Serratia marcescens grown in a spaceflight environment at the phenotypic, transcriptomic and proteomic levels. From November 1 to November 17, 2011, a strain of S. marcescens was sent into space for 398 h on the Shenzhou VIII spacecraft, and a ground simulation was performed as a control (LCT-SM213). After the flight, two mutant strains (LCT-SM166 and LCT-SM262) were selected for further analysis. Although no changes in morphology, post-culture growth kinetics, hemolysis or antibiotic sensitivity were observed, the two mutant strains exhibited significant changes in their metabolic profiles after exposure to spaceflight. Enrichment analysis of the transcriptome showed that the differentially expressed genes of the two spaceflight strains and the ground control strain mainly included those involved in metabolism and degradation. The proteome revealed that changes at the protein level were also associated with metabolic functions, such as glycolysis/gluconeogenesis, pyruvate metabolism, arginine and proline metabolism, and the degradation of valine, leucine and isoleucine. In summary, S. marcescens showed alterations primarily in genes and proteins associated with metabolism under spaceflight conditions, providing valuable clues for future research.

  10. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P


    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics, with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, the Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  11. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred


    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and a local energy planning case. The evaluation was mainly carried out...... Simulation programs are proposed as a control-support tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and the barriers to and breakthrough of the technology are briefly discussed....... A model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors...

  12. Large-scale structure

    CERN Document Server

    White, S D M


    Abstract. Recent observational surveys have made substantial progress in quantifying the structure of the Universe on large scales. Galaxy density and galaxy velocity fields show deviations from the predictions of a homogeneous and isotropic world model on scales approaching one percent of the current horizon scale. A comparison of the amplitudes in density and in velocity provides the first direct dynamical evidence in favour of a high mean density similar to that required for closure. The fluctuations observed on these scales have the amplitude predicted by the standard Cold Dark Matter (CDM) model when this model is normalised to agree with the microwave background fluctuations measured on much larger scales by the COBE satellite. However, a CDM model with this amplitude appears inconsistent with observational data on smaller scales. In addition it predicts a scale dependence of fluctuation amplitude which disagrees with that observed for galaxies in the APM survey of two million faint galaxi...

  13. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also significantly increase the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where targets cannot be fully resolved yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
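To make the combinatorial point concrete, here is a sketch (not one of the report's algorithms) of the core data-association step of a simple tracker: greedy nearest-neighbour pairing of detections to tracks within a distance gate. Multi-hypothesis trackers replace this greedy step with an enumeration over assignment hypotheses, which is exactly what explodes as target counts grow:

```python
# Greedy gated nearest-neighbour data association (illustrative sketch).
import math

def associate(tracks, detections, gate=5.0):
    """Greedily pair each track with its nearest ungated detection.

    tracks, detections: lists of (x, y) positions.
    Returns {track_index: detection_index} for pairs within `gate`.
    """
    # All track-detection pairs, cheapest first.
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    assigned, used_t, used_d = {}, set(), set()
    for cost, ti, di in pairs:
        if cost > gate:
            break  # everything beyond the gate is discarded
        if ti not in used_t and di not in used_d:
            assigned[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return assigned

tracks = [(0.0, 0.0), (10.0, 10.0)]
dets = [(9.5, 10.2), (0.3, -0.1), (50.0, 50.0)]
print(associate(tracks, dets))  # {0: 1, 1: 0}
```

For closely spaced objects this greedy choice is precisely where ambiguity arises, motivating the more expensive multi-hypothesis approaches the report evaluates.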

  14. An Efficient Large-Scale Retroviral Transduction Method Involving Preloading the Vector into a RetroNectin-Coated Bag with Low-Temperature Shaking (United States)

    Dodo, Katsuyuki; Chono, Hideto; Saito, Naoki; Tanaka, Yoshinori; Tahara, Kenichi; Nukaya, Ikuei; Mineno, Junichi


    In retroviral vector-mediated gene transfer, transduction efficiency can be hampered by inhibitory molecules derived from the culture fluid of virus producer cell lines. To remove these inhibitory molecules to enable better gene transduction, we had previously developed a transduction method using a fibronectin fragment-coated vessel (i.e., the RetroNectin-bound virus transduction method). In the present study, we developed a method that combined RetroNectin-bound virus transduction with low-temperature shaking and applied this method in manufacturing autologous retroviral-engineered T cells for adoptive transfer gene therapy in a large-scale closed system. Retroviral vector was preloaded into a RetroNectin-coated bag and incubated at 4°C for 16 h on a reciprocating shaker at 50 rounds per minute. After the supernatant was removed, activated T cells were added to the bag. The bag transduction method has the advantage of increasing transduction efficiency, as simply flipping over the bag during gene transduction facilitates more efficient utilization of the retroviral vector adsorbed on the top and bottom surfaces of the bag. Finally, we performed validation runs of endoribonuclease MazF-modified CD4+ T cell manufacturing for HIV-1 gene therapy and T cell receptor-modified T cell manufacturing for MAGE-A4 antigen-expressing cancer gene therapy and achieved over 200-fold (≥10^10) and 100-fold (≥5×10^9) expansion, respectively. In conclusion, we demonstrated that the large-scale closed transduction system is highly efficient for retroviral vector-based T cell manufacturing for adoptive transfer gene therapy, and this technology is expected to be amenable to automation and improve current clinical gene therapy protocols. PMID:24454964

  15. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris


    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  16. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.


    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  17. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics


    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  18. Large-Scale Selective Functionalization of Alkanes. (United States)

    Goldberg, Karen I; Goldman, Alan S


    Great progress has been made in the past several decades concerning C-H bond functionalization. But despite many significant advances, a commercially viable large-scale process for selective alkane functionalization remains an unreached goal. Such conversions will require highly active, selective, and long-lived catalysts. In addition, essentially complete atom-economy will be required. Thus, any reagents used in transforming the alkanes must be almost free (e.g., O2, H2O, N2), or they should be incorporated into the desired large-scale product. Any side-products should be completely benign or have value as fuels (e.g., H2 or other alkanes). Progress and promising leads toward the development of such systems involving primarily molecular transition metal catalysts are described.

  19. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K


    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  20. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg


    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro...... peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  1. Large scale nanopatterning of graphene

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, P.L., E-mail: [Research Institute for Technical Physics and Materials Science (MFA) of HAS, Korean-Hungarian Joint Laboratory for Nanosciences, Budapest H-1525, P.O. Box 49 (Hungary); Budapest University of Technology and Economics (BUTE), Department of Physics, Solid State Physics Laboratory, Budapest H-1521, P.O. Box 91 (Hungary); Tovari, E.; Csonka, S. [Budapest University of Technology and Economics (BUTE), Department of Physics, Solid State Physics Laboratory, Budapest H-1521, P.O. Box 91 (Hungary); Kamaras, K. [Research Institute for Solid State Physics and Optics of HAS, Budapest H-1525, P.O. Box 49 (Hungary); Horvath, Z.E.; Biro, L.P. [Research Institute for Technical Physics and Materials Science (MFA) of HAS, Korean-Hungarian Joint Laboratory for Nanosciences, Budapest H-1525, P.O. Box 49 (Hungary)


    Recently, we have shown that atomically perfect zig-zag oriented edges can be shaped by exploiting the orientation-dependent oxidation of graphene, annealing the samples in an inert atmosphere where the oxygen source is the SiO2 substrate itself. In the present study, we showed that large-scale patterning of graphene using a conventional lithography technique can be combined with control of crystallographic orientation and edge shaping. We applied electron beam lithography (EBL) followed by low-energy O+/Ar+ plasma etching to pattern mechanically exfoliated graphene flakes. As AFM imaging of the samples revealed, the controlled oxidation transformed the originally circular holes into polygons with edges parallel to the zig-zag direction, showing the possibility of atomically precise, large-area patterning of graphene.

  2. Large-Scale Sequence Comparison. (United States)

    Lal, Devi; Verma, Mansi


    There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. Various methods are available for comparing sequences; alignment is first and foremost among them, both for sequences with a small number of base pairs and for large-scale genome comparison. Various tools are available for performing pairwise comparison of large sequences. The best-known tools either perform global alignment or generate local alignments between two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by a description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison, including LAGAN, MUMmer, BLASTZ, and AVID.
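The global-alignment idea behind these tools can be sketched as a small dynamic program (Needleman-Wunsch). The scoring values here are illustrative; real tools use PAM/BLOSUM substitution matrices for proteins and heuristics such as BLAST and FASTA for speed at genome scale:

```python
# Minimal Needleman-Wunsch global alignment scorer (score only, no traceback).
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global alignment score of sequences a and b."""
    # `prev` holds the DP row for the first i characters of a.
    prev = [j * gap for j in range(len(b) + 1)]  # aligning "" against b[:j]
    for i, ca in enumerate(a, 1):
        curr = [i * gap]  # aligning a[:i] against ""
        for j, cb in enumerate(b, 1):
            curr.append(max(
                prev[j - 1] + (match if ca == cb else mismatch),  # (mis)match
                prev[j] + gap,       # gap in b
                curr[j - 1] + gap,   # gap in a
            ))
        prev = curr
    return prev[-1]

print(nw_score("GATTACA", "GATTACA"))  # 7
```

Swapping the per-character match/mismatch scores for a substitution-matrix lookup turns this into the protein-alignment scoring the chapter describes.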

  3. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas


    A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. While there are simplifying factors in these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities which lead to the formation of large eddies is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representing the turbulent motions, but a number of challenges remain in their practical application. Massively parallel computational resources are likely to be necessary to address the complex coupled phenomena adequately, to the level of detail that is necessary.

  4. Large-scale linear programs in planning and prediction. (United States)


    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
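For context on how such uncertainty typically enters these programs: a chance constraint on a linear inequality with Gaussian row data admits a standard deterministic reformulation (a generic sketch, not necessarily the formulation used in this work):

```latex
% Chance constraint and its deterministic equivalent for a ~ N(abar, Sigma):
\Pr\!\left(a^{\top}x \le b\right) \ge 1-\varepsilon
\quad\Longleftrightarrow\quad
\bar{a}^{\top}x + \Phi^{-1}(1-\varepsilon)\,\bigl\|\Sigma^{1/2}x\bigr\| \le b
```

For epsilon <= 1/2 the right-hand side is a convex second-order cone constraint, which is what keeps such models tractable at large scale; robust optimization replaces the probabilistic guarantee with a worst case over an uncertainty set.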

  5. Large-scale galaxy bias (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian


    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
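Schematically, the perturbative bias expansion described above takes the following form at second order in the leading local observables (a sketch following the review's conventions):

```latex
\delta_g(\mathbf{x},\tau) = b_1\,\delta + \frac{b_2}{2}\,\delta^2
  + b_{K^2}\,\bigl(K_{ij}K^{ij}\bigr) + \cdots + \epsilon,
\qquad
K_{ij} \equiv \left(\frac{\partial_i\partial_j}{\nabla^2}
  - \frac{\delta_{ij}}{3}\right)\delta
```

Here delta is the matter overdensity, K_ij the tidal field, epsilon a stochastic noise term uncorrelated with the large-scale fields, and b_1, b_2, b_{K^2} are the bias parameters that absorb the small-scale physics of galaxy formation.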

  6. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso


    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as applications in physics, biology, neuroscience, sociology and other technical areas.


    National Research Council Canada - National Science Library

    Don A Klinger; Rebecca Luce-Kapler


      With the implementation of the Ontario Secondary School Literacy Test (OSSLT) in 2002, Ontario became the first province in Canada requiring successful completion of a large-scale, high-stakes literacy test for high school graduation...

  8. Agri-Environmental Resource Management by Large-Scale Collective Action: Determining KEY Success Factors (United States)

    Uetake, Tetsuya


    Purpose: Large-scale collective action is necessary when managing agricultural natural resources such as biodiversity and water quality. This paper determines the key factors to the success of such action. Design/Methodology/Approach: This paper analyses four large-scale collective actions used to manage agri-environmental resources in Canada and…

  9. Introducing Large-Scale Innovation in Schools (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.


    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  10. Large-Scale Reform Comes of Age (United States)

    Fullan, Michael


    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  11. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  12. Sample-Starved Large Scale Network Analysis (United States)


    Foundational principles for large-scale inference on the structure of covariance: we developed general principles for ... concise but accessible format. These principles are applicable to large-scale complex network applications arising in genomics, connectomics, and eco-informatics ... available to estimate or detect patterns in the matrix. Subject terms: multivariate dependency structure; multivariate spatio-temporal prediction.

  13. Network robustness under large-scale attacks

    CERN Document Server

    Zhou, Qing; Liu, Ruifang; Cui, Shuguang


    Network Robustness under Large-Scale Attacks provides the analysis of network robustness under attacks, with a focus on large-scale correlated physical attacks. The book begins with a thorough overview of the latest research and techniques to analyze the network responses to different types of attacks over various network topologies and connection models. It then introduces a new large-scale physical attack model coined as area attack, under which a new network robustness measure is introduced and applied to study the network responses. With this book, readers will learn the necessary tools to evaluate how a complex network responds to random and possibly correlated attacks.
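    The area attack described above can be illustrated with a small sketch (all parameters are hypothetical; the fraction of nodes left in the largest connected component is used here as a simple robustness proxy, not the book's specific measure): build a random geometric graph, fail every node inside an attack disc, and compare the giant component before and after.

    ```python
    import random
    from collections import deque

    def geometric_graph(n, radius, rng):
        """Random geometric graph: n nodes in the unit square, edge if closer than radius."""
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        adj = {i: set() for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
                if dx * dx + dy * dy <= radius * radius:
                    adj[i].add(j)
                    adj[j].add(i)
        return pts, adj

    def giant_fraction(adj, alive, n_total):
        """Size of the largest connected component among `alive` nodes / n_total."""
        seen, best = set(), 0
        for s in alive:
            if s in seen:
                continue
            queue, size = deque([s]), 0
            seen.add(s)
            while queue:
                u = queue.popleft()
                size += 1
                for v in adj[u]:
                    if v in alive and v not in seen:
                        seen.add(v)
                        queue.append(v)
            best = max(best, size)
        return best / n_total

    rng = random.Random(42)
    n = 300
    pts, adj = geometric_graph(n, 0.12, rng)
    everyone = set(range(n))
    before = giant_fraction(adj, everyone, n)

    # area attack: every node within radius 0.25 of the attack centre fails at once
    cx, cy, r = 0.5, 0.5, 0.25
    survivors = {i for i in everyone
                 if (pts[i][0] - cx) ** 2 + (pts[i][1] - cy) ** 2 > r * r}
    after = giant_fraction(adj, survivors, n)
    ```

    Because an area attack removes spatially correlated nodes, it typically fragments a geometric network far more than removing the same number of random nodes would.
    
    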

  14. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid


    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues...

  15. SDI Large-Scale System Technology Study

    National Research Council Canada - National Science Library


    .... This coordination is addressed by the Battle Management function. The algorithms and technologies required to support Battle Management are the subject of the SDC Large Scale Systems Technology Study...

  16. Large Scale Metal Additive Techniques Review

    Energy Technology Data Exchange (ETDEWEB)

    Nycz, Andrzej [ORNL; Adediran, Adeola I [ORNL; Noakes, Mark W [ORNL; Love, Lonnie J [ORNL


    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology, with an emphasis on expanding the geometric limits.

  17. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail:; Rubin, S. G., E-mail: [National Research Nuclear University MEPhI (Russian Federation)


    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  18. Newton Methods for Large Scale Problems in Machine Learning (United States)

    Hansen, Samantha Leigh


    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  19. Resolute large scale mining company contribution to health services of

    African Journals Online (AJOL)

    Introduction: In 1995 the Tanzanian Government reformed the mining industry, and the new policy allowed the involvement of multinational companies, but the communities living near new large-scale gold mines were expected to benefit from the industry in terms of socio-economic, health, education, employment, safe drinking ...

  20. CANADA

    International Development Research Centre (IDRC) Digital Library (Canada)

    Hakan Mustafa

    . YYYY. Supplier number. Protected B* (once completed). SUPPLIER GENERAL, TAX AND BANKING INFORMATION – CANADA. Section 1: GENERAL INFORMATION. Name of individual (surname, first name) or ...

  1. Large scale structure statistics: Finite volume effects (United States)

    Colombi, S.; Bouchet, F. R.; Schaeffer, R.


    We study finite volume effects on the count probability distribution function P_N(l) and the averaged Q-body correlations ξ̄_Q (2 ≤ Q ≤ 5). These statistics are computed for cubic cells of size l. We use as an example the matter distribution of a cold dark matter (CDM) universe involving approximately 3 × 10^5 particles. The main effect of the finiteness of the sampled volume is to induce an abrupt cut-off on the function P_N(l) at large N. This clear signature makes an analysis of the consequences easy, and one can envisage a correction procedure. Indeed, we demonstrate how an unfair sample can strongly affect the estimates of the functions ξ̄_Q for Q ≥ 3 (and decrease the measured zero of the two-body correlation function). We propose a method to correct for this artefact, or at least to evaluate the corresponding errors. We show that the correlations are systematically underestimated by direct measurements. We find that, once corrected, the statistical properties of the CDM universe appear compatible with the scaling relation S_Q ≡ ξ̄_Q/ξ̄_2^(Q-1) = constant with respect to scale in the non-linear regime; this was not the case with direct measurements. However, we note a deviation from scaling at scales close to the correlation length. It is probably due to the transition between the highly non-linear regime and the weakly correlated regime, where the functions S_Q also seem to present a plateau. We apply the same procedure to simulations with hot dark matter (HDM) and white noise initial conditions, with similar results. Our method thus provides the first accurate measurement of the normalized skewness, S_3, and the normalized kurtosis, S_4, for three typical models of large-scale structure formation in an expanding universe.
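    The counts-in-cells statistics described in this record can be estimated with a short sketch (a toy 2-D clustered point set is used here purely for illustration; the factorial-moment estimators below are the standard shot-noise-corrected ones, with S_3 = ξ̄_3/ξ̄_2²):

    ```python
    import random

    def counts_in_cells(points, ncell):
        """Histogram 2-D points in the unit square into an ncell x ncell grid."""
        counts = [0] * (ncell * ncell)
        for x, y in points:
            i = min(int(x * ncell), ncell - 1)
            j = min(int(y * ncell), ncell - 1)
            counts[i * ncell + j] += 1
        return counts

    def cic_statistics(counts):
        """Shot-noise-corrected averaged correlations from factorial moments."""
        m = len(counts)
        nbar = sum(counts) / m
        f2 = sum(n * (n - 1) for n in counts) / m            # <N(N-1)> = nbar^2 (1 + xi2)
        f3 = sum(n * (n - 1) * (n - 2) for n in counts) / m  # <N(N-1)(N-2)> = nbar^3 (1 + 3 xi2 + xi3)
        xi2 = f2 / nbar ** 2 - 1
        xi3 = f3 / nbar ** 3 - 3 * xi2 - 1
        return nbar, xi2, xi3 / xi2 ** 2                     # S_3 = xi3 / xi2^2

    # toy clustered sample (Neyman-Scott style): Gaussian blobs around random centres
    rng = random.Random(1)
    pts = []
    for _ in range(60):
        cx, cy = rng.random(), rng.random()
        for _ in range(40):
            pts.append(((cx + rng.gauss(0, 0.02)) % 1.0,
                        (cy + rng.gauss(0, 0.02)) % 1.0))

    nbar, xi2, s3 = cic_statistics(counts_in_cells(pts, 16))
    ```

    For a clustered sample such as this, ξ̄_2 and S_3 come out positive; finite-volume effects of the kind studied in the record would show up as a cut-off in the high-N tail of the cell counts.
    
    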

  2. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro


    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  3. Large-scale approaches for glycobiology


    Christopher T. Campbell; Yarema, Kevin J.


    Glycosylation, the attachment of carbohydrates to proteins and lipids, influences many biological processes. Despite detailed characterization of the cellular components that carry out glycosylation, a complete picture of a cell's glycoconjugates remains elusive because of the challenges inherent in characterizing complex carbohydrates. This article reviews large-scale techniques for accelerating progress in glycobiology.

  4. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, show how such ...

  5. Management of large-scale technology (United States)

    Levine, A.


    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared in a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  6. A large-scale biomass bulk terminal

    NARCIS (Netherlands)

    Wu, M.R.


    This research explores the possibility of a large-scale bulk terminal in West Europe dedicated to handle solid and liquid biomass materials. Various issues regarding the conceptual design of such a terminal have been investigated and demonstrated in this research: the potential biomass materials

  7. Code generation for large scale applications

    NARCIS (Netherlands)

    Mark, Paul Johannes van der


    Efficient execution of large-scale application codes is a primary requirement in many cases. High efficiency can only be achieved by utilizing architecture-independent efficient algorithms and exploiting specific architecture-dependent characteristics of a given computer architecture. However,

  8. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.


    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  9. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn


    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  10. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun


    The application of Topological Routing to large-scale networks is discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed, along with perspectives on the prerequisites for practical deployment of Topological Routing...

  11. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    The application of Topological Routing to large-scale networks is discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed, along with perspectives on the prerequisites for practical deployment of Topological Routing...

  12. Editorial: Bioprocess beyond the large scale production. (United States)

    Koo, Yoon-Mo; Srinophakun, Penjit


    Advances in bioprocess engineering continue to step forward in its own field of large scale production in mid- and down-stream processes of biotechnology. Some recent research results are presented in this AFOB (Asian Federation of Biotechnology) Special issue of Advances in Bioprocess Engineering. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer


    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square...

  14. Large-scale Heterogeneous Network Data Analysis (United States)


    “Information Diffusion over Crowds with Social Network.” ACM SIGGRAPH 2012 (poster). Wan-Yu Lin, Nanyun Peng, Chun-Chao Yen, Shou-De Lin. “Online Plagiarism ...” Abstract: A large-scale network is a powerful data structure allowing the depiction of relationship information between entities. Recent ... we propose an unsupervised tensor-based mechanism, considering higher-order relational information, to model the complex semantics of nodes. The ...

  15. A large-scale biomass bulk terminal


    Wu, M.R.


    This research explores the possibility of a large-scale bulk terminal in West Europe dedicated to handle solid and liquid biomass materials. Various issues regarding the conceptual design of such a terminal have been investigated and demonstrated in this research: the potential biomass materials that will be the major international trade flows in the future, the characteristics of these potential biomass materials, the interaction between the material properties and terminal equipment, the pe...

  16. Large Scale Structure of the Universe (United States)

    Kaplinghat, Manoj


    These notes are based on four lectures given at the Theoretical Advanced Study Institute in 2009 on the large scale structure of the universe. They provide a pedagogical introduction to the temporal evolution of linear density perturbations in the universe and a discussion of how density perturbations on small scales depend on the particle properties of dark matter. The notes assume the reader is familiar with the concepts and mathematics required to describe isotropic and homogeneous cosmology.

  17. Large-scale neuromorphic computing systems (United States)

    Furber, Steve


    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  18. Large-scale film structures in space (United States)

    Simon, Kirill

    Up-to-date space technology calls not only for taking account of, but also for exploiting, specific attributes of the outer space environment such as weightlessness, centrifugal forces, hard vacuum and powerful solar radiation. These specific characteristics of outer space allow the use of various structures in space whose development and operation would be impossible or inexpedient on Earth. Currently, interest in large-scale space structures is growing; there are various projects on such multi-body space structures, and experiments are being conducted for their development. Such designs are represented by spacecraft with solar sails, orbiting solar reflectors, solar energy concentrators, low-frequency antennas and others. This paper examines a large-scale flexible space structure made from thin reflective film used as the working surface of a sunlight reflector or sailcraft. Specifically, this paper deals with techniques of modeling large-scale space structure attitude motion, numerical calculation of vibrations which occur in the system after a spatial slew is performed, as well as optimal trajectory computations. Various methods of film structure attitude control and stabilization, including optimal slewing programs, are discussed.

  19. Economically viable large-scale hydrogen liquefaction (United States)

    Cardella, U.; Decker, L.; Klein, H.


    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  20. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...... main focus. Here the general perception of the nature and role in society of large scale networks as a fundamental infrastructure is analysed. This analysis focuses on the effects of the technical DDN projects and on the perception of network infrastructure as expressed by key decision makers....... A summary of the most pressing growth limits for the coming three decades is given....

  1. Colloquium: Large scale simulations on GPU clusters (United States)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano


    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems like the solution of Partial Differential Equations.

  2. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg


    -curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New......The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some...

  3. Large-scale carbon nanotube synthesis. (United States)

    MacKenzie, Kiern J; Dunens, Oscar M; See, Chee H; Harris, Andrew T


    Carbon nanotubes (CNTs) are a form of crystalline carbon with extraordinary chemical, physical, electrical and mechanical properties, making them potentially valuable in a broad range of applications. These properties have resulted in an unprecedented level of interest in the development of techniques to manufacture CNTs, and consequently a raft of competing patents have been issued, with universities and commercial entities alike looking to obtain patent protection for their inventions. In this paper we review relevant aspects of international patent law, summarize CNT definitions, discuss patent irregularities, and consider the implications of the widening gap between nanotechnology practice and the underlying patent law. This is followed by a review of the chemical vapour deposition technique of CNT synthesis, in particular using a fluidised bed, identified as the most promising method to date for the large-scale, low cost production of CNTs. We further examine the carbon nanotube patent space, focusing primarily on patents for CNTs produced via CVD and FBCVD techniques. This patent space is both convoluted and uncertain, and it appears likely that some form of litigation will ensue in future to ultimately determine intellectual property ownership in various regions. We also discuss the likely effect of this 'patent thicket' on the commercialisation of large-scale CNT synthesis processes.

  4. Large-scale Globally Propagating Coronal Waves

    Directory of Open Access Journals (Sweden)

    Alexander Warmuth


    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous spaced-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the “classical” interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which “pseudo waves” are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  5. Internationalization Measures in Large Scale Research Projects (United States)

    Soeding, Emanuel; Smith, Nancy


    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged with little experience on how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, those undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment. These include e.g. institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups, and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and retain the brightest minds for a project.

  6. Large-scale design of supersonic aircraft via collaborative optimization (United States)

    Manning, Valerie Michelle

    The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for multidisciplinary optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamics and structural analyses, and integration within a collaborative optimization framework. Response surface estimation and reduced basis modeling were used to reduce the computational expense of the optimization and to ensure smooth analytic gradients. Both design problems converged successfully. In each problem, the system optimizer minimized aircraft take-off weight with respect to global and disciplinary design variables, subject to aeroelastic and performance constraints. In previous work, the method successfully solved simple and medium fidelity problems. The current work demonstrates collaborative optimization with large-scale designs using industry-standard analyses. The research shows that collaborative optimization is a valuable method for large-scale design, ready for real-world implementation.
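    The collaborative-optimization architecture described here can be sketched on a deliberately toy problem (all variables and constraints below are invented for illustration): each "discipline" solves its own subproblem, min ||x − z||² subject to a local constraint, and the system optimizer minimizes the global objective while a penalty drives the interdisciplinary discrepancies to zero.

    ```python
    def project_halfspace(z, a, b):
        """Discipline subproblem: min ||x - z||^2  s.t.  a.x >= b  (analytic projection)."""
        s = a[0] * z[0] + a[1] * z[1]
        if s >= b:
            return (z[0], z[1])
        t = (b - s) / (a[0] ** 2 + a[1] ** 2)
        return (z[0] + t * a[0], z[1] + t * a[1])

    def discrepancy(z, a, b):
        """Squared distance between the system targets z and the discipline's design."""
        x = project_halfspace(z, a, b)
        return (x[0] - z[0]) ** 2 + (x[1] - z[1]) ** 2

    # two toy disciplines, each owning one local constraint
    DISCIPLINES = [((1.0, 1.0), 1.0),   # discipline 1: x1 + x2 >= 1
                   ((1.0, 0.0), 0.2)]   # discipline 2: x1 >= 0.2

    def system_objective(z, rho):
        """Global objective plus penalty on interdisciplinary incompatibility."""
        return z[0] ** 2 + z[1] ** 2 + rho * sum(
            discrepancy(z, a, b) for a, b in DISCIPLINES)

    # crude numerical gradient descent on the system-level variables z
    z, rho, lr, h = [2.0, 2.0], 100.0, 0.004, 1e-6
    for _ in range(3000):
        grad = []
        for k in range(2):
            zp, zm = list(z), list(z)
            zp[k] += h
            zm[k] -= h
            grad.append((system_objective(zp, rho) - system_objective(zm, rho)) / (2 * h))
        z = [z[k] - lr * grad[k] for k in range(2)]
    # the underlying constrained optimum is (0.5, 0.5); the penalty pulls z close to it
    ```

    The key design point, as in the record, is that the disciplinary subproblems are independent given z and could run in parallel, with only the discrepancy values returned to the system level.
    
    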

  7. Multivariate Clustering of Large-Scale Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Critchlow, T


    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial space is important since 'similar' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n²) to O(n × g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial space. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
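    The threshold-based cosine clustering described above can be sketched in a few lines. The greedy single-pass scheme below is an illustrative simplification, not the paper's exact algorithm (which also exploits the geometry of the measure to bound the g(f(u)) candidate set); the data and threshold are invented for the example.

```python
import numpy as np

def cosine_cluster(points, threshold=0.95):
    """Greedy single-pass clustering by cosine similarity.

    Each point joins the first cluster whose representative (its
    founding point) has cosine similarity >= threshold; otherwise
    the point founds a new cluster.
    """
    # Normalize rows so cosine similarity reduces to a dot product.
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    unit = points / np.where(norms == 0, 1, norms)

    reps = []      # unit vectors representing each cluster
    labels = []
    for p in unit:
        for k, r in enumerate(reps):
            if p @ r >= threshold:
                labels.append(k)
                break
        else:
            labels.append(len(reps))
            reps.append(p)
    return np.array(labels)

# Toy "field variables": two tight directional bundles.
data = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0], [0.05, 0.98]])
print(cosine_cluster(data, threshold=0.9))  # → [0 0 1 1]
```

    Because only directions matter, two points with similar field behaviour cluster together regardless of how far apart they sit spatially, matching the abstract's motivation for excluding the spatial variables.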


  9. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan


    In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation is the corresponding early-time accelerated era, so the universe is in this sense symmetric with respect to accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and in physics in general. There are several well-defined approaches to solving this problem. One of them is the assumption that dark energy exists in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has a gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity, including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and of the speed of light (among others) provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group, the problem of the accelerated expansion of the large-scale universe is addressed with a new idea, named varying ghost dark energy. The second group contains cosmological models addressed to the same problem involving either new parameterizations of the equation-of-state parameter of dark energy (such as the varying polytropic gas) or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is governed by general relativity) is demonstrated as well.
Exploring the nature of the accelerated expansion of the large-scale universe involving generalized
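    For reference, the nonlinear dark-sector interactions mentioned above are usually introduced through coupled continuity equations; a standard parameterization (not necessarily the exact form used in the reviewed models) reads

```latex
\dot{\rho}_{\rm dm} + 3H\rho_{\rm dm} = Q, \qquad
\dot{\rho}_{\rm de} + 3H\left(1 + w_{\rm de}\right)\rho_{\rm de} = -Q,
```

    where H is the Hubble parameter, w_de the dark energy equation-of-state parameter, and Q > 0 transfers energy from dark energy to dark matter while the total energy density of the dark sector remains conserved.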

  10. Large scale water lens for solar concentration. (United States)

    Mondol, A S; Vogel, B; Bastian, G


    Properties of large-scale water lenses for solar concentration were investigated. The lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. The lenses were exposed to sunlight, and the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with raytracing software based on the lens shape. We achieved a good match between experimental and theoretical data by considering the wavelength-dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.
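    As a rough orientation for the optics involved, a water lens with one nearly flat face behaves like a plano-convex lens. The sketch below uses the thin-lens approximation with an assumed refractive index; the paper itself models the actual foil shape with raytracing and a finite element simulation, including wavelength dependence.

```python
# Thin-lens estimate for a plano-convex water lens (a rough sketch;
# n actually varies with wavelength, as the paper's model accounts for).
n_water = 1.33          # approximate refractive index of water (visible range)

def focal_length_plano_convex(radius_of_curvature_m):
    """Lensmaker's equation with one flat surface: 1/f = (n - 1)/R."""
    return radius_of_curvature_m / (n_water - 1)

# A foil sagging into a spherical cap with ~1 m radius of curvature:
print(round(focal_length_plano_convex(1.0), 2))  # ≈ 3.03 m
```

    The dependence of f on the radius of curvature is what links the water volume (and hence the foil load and shape) to the measured focal length.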

  11. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.


    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
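    The SQP iteration itself can be illustrated on a toy equality-constrained problem: each step solves a QP subproblem built from the Lagrangian Hessian and the constraint Jacobian. The dense, exact-Hessian sketch below only shows this basic structure; the thesis replaces it with reduced-Hessian quasi-Newton approximations, sparse null-space bases and incomplete QP solves for large problems.

```python
import numpy as np

# Bare-bones equality-constrained SQP: Newton iteration on the KKT
# conditions, with each step solving the QP subproblem
#   min 0.5 p^T H p + g^T p   s.t.   A p + c = 0.
def f(x):    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2
def grad(x): return np.array([2 * (x[0] - 1), 2 * (x[1] - 2.5)])
def c(x):    return np.array([x[0] + x[1] - 3.0])   # equality constraint
A = np.array([[1.0, 1.0]])                          # constraint Jacobian
H = 2 * np.eye(2)                                   # exact Hessian of f

x = np.zeros(2)
for _ in range(10):
    kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-grad(x), -c(x)])
    p = np.linalg.solve(kkt, rhs)[:2]               # QP step
    x = x + p
    if np.linalg.norm(p) < 1e-10:
        break

print(np.round(x, 3))  # converges to [0.75 2.25]
```

    For this quadratic objective with a linear constraint, a single SQP step lands on the solution; for general nonlinear problems the iteration repeats, which is where curvature approximations and sparse data structures become essential at large scale.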

  12. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.


    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper, by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004), illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and "trap-happiness" effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  13. Foundational perspectives on causality in large-scale brain networks (United States)

    Mannino, Michael; Bressler, Steven L.


    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.

  14. Large Scale EOF Analysis of Climate Data (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.


    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation, including the ENSO and PDO, that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
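    At its core, EOF extraction is an SVD of the centered (time × space) data matrix. The serial NumPy sketch below, on an invented toy field, shows what the distributed Spark implementation computes at multi-TB scale.

```python
import numpy as np

# EOFs of a (time x space) anomaly matrix are the right singular
# vectors of its SVD; squared singular values give the variance
# explained by each mode. The "climate field" here is synthetic.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)
pattern1 = np.array([1.0, 0.5, -0.5, -1.0])   # toy spatial mode (dominant)
pattern2 = np.array([0.5, -1.0, 1.0, -0.5])   # weaker orthogonal mode
field = (np.outer(np.sin(t), pattern1)
         + 0.3 * np.outer(np.cos(3 * t), pattern2)
         + 0.01 * rng.standard_normal((200, 4)))

anomaly = field - field.mean(axis=0)          # remove the time mean
u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
eofs = vt                                     # rows: EOF spatial patterns
variance_frac = s ** 2 / np.sum(s ** 2)
print(variance_frac[0])                       # leading EOF dominates
```

    In the distributed setting the same decomposition is obtained without materializing the full matrix on one node, and latitude/depth weighting enters by scaling the columns before the factorization.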

  15. Large-scale carbon fiber tests (United States)

    Pride, R. A.


    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  16. Food appropriation through large scale land acquisitions (United States)

    Rulli, Maria Cristina; D'Odorico, Paolo


    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large-scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown on the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced on the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested on the acquired land could ensure food security for the local populations.
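    The order of magnitude of such "people fed" figures can be reproduced with a back-of-the-envelope calorie budget: calories producible on the acquired land divided by a per-capita requirement. All numbers below are illustrative assumptions, not the paper's data.

```python
# Back-of-the-envelope "people fed" calculation; every value here is
# an assumed round number for illustration only.
kcal_per_person_per_day = 2500          # assumed dietary requirement
acquired_area_ha = 40e6                 # illustrative total LSLA area
yield_kcal_per_ha_per_year = 10e6       # illustrative crop calorie yield

total_kcal_per_year = acquired_area_ha * yield_kcal_per_ha_per_year
people_fed = total_kcal_per_year / (kcal_per_person_per_day * 365)
print(f"{people_fed / 1e6:.0f} million people")  # → "438 million people"
```

    With these assumed inputs the estimate falls inside the paper's 300-550 million range; the study's actual assessment is deal-by-deal, with crop-specific yields and yield-gap scenarios.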

  17. Large scale digital atlases in neuroscience (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.


    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and, increasingly, its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large-scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  18. Large-scale PV grid integration

    Energy Technology Data Exchange (ETDEWEB)

    Martins Souza, Hellen; Jose do Carmo, Marlon; Rocha de Oliveira, Angelo [CEFET-MG, Leopoldina (Brazil). Dept. of Control and Automation; Willer de Oliveira, Leonard [Universidade Federal de Juiz de Fora (UFJF) (Brazil). Power Systems; Ribeiro, Paulo Fernando [Technische Univ. Eindhoven (Netherlands). Electrical Energy Systems


    This paper reviews the development of solar energy as a renewable source of electricity, which is vital to the development of a more sustainable world, pointing out challenges and technology trends. First, the development of photovoltaic panels is reviewed, focusing on countries where the technology is at its most advanced stage, such as Germany, the Netherlands, the United States and Spain, and showing trends in this type of power generation source. Important aspects such as efficiency and production costs of photovoltaic power plants are covered. In addition, the integration of these generation sources into the electric power system is considered with regard to their impact on system parameters such as power quality, stability and protection. The existing rules and interconnection standards for large-scale integration/implementation of photovoltaic (PV) generation are reviewed and discussed. Finally, a special application case of PV in Brazil is briefly mentioned, considering its potential for generation and implementation in the upcoming years. (orig.)

  19. Analysis and Management of Large-Scale Activities Based on Interface (United States)

    Yang, Shaofan; Ji, Jingwei; Lu, Ligang; Wang, Zhiyi

    Based on the concepts of system safety engineering, life cycle and interface from the American system safety standard MIL-STD-882E, this paper applies these concepts to the risk analysis and management of large-scale activities. The personnel, departments, funds and other elements involved throughout the life cycle of a large-scale activity are identified. The ultimate risk sources of large-scale activities, namely people, objects and environment, are recognized and classified from the perspective of interfaces. An accident-cause analysis model is put forward based on accidents at previous large-scale activities, combined with analysis of the risk-source interfaces. The risks at each interface are analyzed, and the various types of risk faced by large-scale activities are summarized. Finally, improvement ideas are proposed concerning risk-management awareness, policies and regulations, risk control, and supervision departments.

  20. Sensitivity technologies for large scale simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias (Rice University, Houston, TX); Wilcox, Lucas C. (Brown University, Providence, RI); Hill, Judith C. (Carnegie Mellon University, Pittsburgh, PA); Ghattas, Omar (Carnegie Mellon University, Pittsburgh, PA); Berggren, Martin Olof (University of UppSala, Sweden); Akcelik, Volkan (Carnegie Mellon University, Pittsburgh, PA); Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard


    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations.
The hybrid automatic differentiation method was applied to a first
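    The direct-versus-adjoint trade-off discussed in the abstract can be seen on a minimal steady-state model: for an output J = cᵀu with state equation A(p)u = b, the direct method needs one linear solve per parameter, while the adjoint method needs a single solve with Aᵀ. The example below is a generic sketch with random matrices, not the report's flow solvers.

```python
import numpy as np

# Direct vs adjoint sensitivities for a steady linear model A(p) u = b
# with output J = c^T u. Parameters p_i scale assumed perturbation
# matrices B_i, i.e. A(p) = A0 + sum_i p_i * B_i, evaluated at p = 0.
n = 4
rng = np.random.default_rng(1)
A0 = np.eye(n) * 3 + 0.1 * rng.standard_normal((n, n))  # well conditioned
b = rng.standard_normal(n)
c = rng.standard_normal(n)
B = [rng.standard_normal((n, n)) for _ in range(3)]

u = np.linalg.solve(A0, b)                              # state solve

# Direct method: differentiate A u = b, giving A du/dp_i = -B_i u,
# so one extra linear solve per parameter.
direct = np.array([c @ np.linalg.solve(A0, -Bi @ u) for Bi in B])

# Adjoint method: one solve with A^T, then cheap dot products,
# regardless of the number of parameters.
lam = np.linalg.solve(A0.T, c)
adjoint = np.array([-lam @ (Bi @ u) for Bi in B])

print(np.allclose(direct, adjoint))  # True: both give dJ/dp
```

    This cost asymmetry is why adjoints dominate when there are many parameters and few outputs (as in the contamination-event reconstruction), while direct sensitivities suit few parameters and many outputs.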

  1. Brief Mental Training Reorganizes Large-Scale Brain Networks. (United States)

    Tang, Yi-Yuan; Tang, Yan; Tang, Rongxiang; Lewis-Peacock, Jarrod A


    Emerging evidence has shown that one form of mental training, mindfulness meditation, can improve attention, emotion regulation and cognitive performance through changes in brain activity and structural connectivity. However, whether and how short-term mindfulness meditation alters large-scale brain networks is not well understood. Here, we applied a novel data-driven technique, multivariate pattern analysis (MVPA), to resting-state fMRI (rsfMRI) data to identify changes in brain activity patterns and assess the neural mechanisms induced by a brief mindfulness training, integrative body-mind training (IBMT), which was previously reported in our series of randomized studies. Whole-brain rsfMRI was performed on an undergraduate group who received 2 weeks of IBMT with 30 min per session (5 h of training in total). Classifiers were trained on measures of functional connectivity in these fMRI data, and they were able to reliably differentiate (with 72% accuracy) patterns of connectivity from before vs. after the IBMT training. After training, an increase in positive functional connections (60 connections) was detected, primarily involving the bilateral superior/middle occipital gyrus, bilateral frontal operculum, bilateral superior temporal gyrus, right superior temporal pole, bilateral insula, caudate and cerebellum. These results suggest that brief mental training alters the functional connectivity of large-scale brain networks at rest, which may involve a portion of the neural circuitry supporting attention, cognitive and affective processing, awareness, sensory integration and reward processing.
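    The MVPA logic, training a classifier on connectivity features and testing whether it separates pre- from post-training scans above chance, can be sketched with synthetic data and a simple nearest-centroid rule. The subject counts, feature dimension and effect size below are invented for illustration; the study's actual classifier and its 72% figure are not reproduced here.

```python
import numpy as np

# Toy MVPA: classify pre- vs post-training "scans" from synthetic
# functional-connectivity features with a nearest-centroid rule and
# leave-one-out cross-validation.
rng = np.random.default_rng(42)
n_per_class, n_edges = 20, 60        # 60 connectivity features per scan
pre = rng.standard_normal((n_per_class, n_edges))
post = rng.standard_normal((n_per_class, n_edges)) + 0.8  # assumed effect

X = np.vstack([pre, post])
y = np.array([0] * n_per_class + [1] * n_per_class)

correct = 0
for i in range(len(X)):
    mask = np.ones(len(X), bool)
    mask[i] = False                                  # hold out scan i
    c0 = X[mask & (y == 0)].mean(axis=0)             # class centroids
    c1 = X[mask & (y == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == y[i]

accuracy = correct / len(X)
print(accuracy > 0.7)  # well above the 50% chance level
```

    Cross-validated accuracy above chance is the evidence that the multivariate connectivity pattern, rather than any single connection, distinguishes the two states.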

  2. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus


    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
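    A core ingredient of such display-aware techniques is level-of-detail selection that ties the working set to screen resolution rather than to data size. The sketch below picks the mip level at which one voxel projects to roughly one pixel; the parameter names and numbers are assumed for illustration, not taken from a specific system.

```python
import math

# Display-aware LOD selection: choose the resolution level at which one
# voxel covers about one screen pixel, so rendering work scales with
# what is visible rather than with the full data size.
def select_lod(voxel_size_world, distance, fov_y_rad, viewport_height_px):
    # World-space extent covered by one pixel at this viewing distance.
    pixel_size_world = 2 * distance * math.tan(fov_y_rad / 2) / viewport_height_px
    ratio = pixel_size_world / voxel_size_world
    # Each coarser level doubles the voxel size; clamp to full resolution.
    return max(0, int(math.floor(math.log2(max(ratio, 1.0)))))

# Nearby data gets full resolution (level 0), distant data a coarser level.
print(select_lod(0.001, 0.5, math.radians(45), 1080),
      select_lod(0.001, 50.0, math.radians(45), 1080))  # → 0 5
```

    In a ray-guided volume renderer this decision is made per brick along each ray, and only the selected bricks are paged into GPU memory, which is what bounds the working set.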

  3. Developing Large-Scale Bayesian Networks by Composition (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  4. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico


    Focuses on the latest insights into and challenges of distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  5. Large scale dynamics of protoplanetary discs (United States)

    Béthune, William


    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the magnetic field and the disk gas is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences for planetary formation. As a second step, I study the launching of disk winds via a global model of a stratified disk embedded in a warm atmosphere.
This model is the first to compute non-ideal effects from

  6. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan


    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  7. Power suppression at large scales in string inflation

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, Michele [Dipartimento di Fisica ed Astronomia, Università di Bologna, via Irnerio 46, Bologna, 40126 (Italy); Downes, Sean; Dutta, Bhaskar, E-mail:, E-mail:, E-mail: [Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX, 77843-4242 (United States)


    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  8. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters. (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi


    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  9. Extending large-scale forest inventories to assess urban forests. (United States)

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter


    Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of the woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential to significantly improve the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on the data from the Italian NFI.
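
    The first-phase estimation logic can be illustrated with a minimal Monte Carlo sketch (the square study area, rectangular forest, and sample size below are invented for illustration and are not the paper's design): points are placed uniformly over the study area, and the proportion falling in forest is an unbiased coverage estimator with a simple variance estimator.

```python
import random

random.seed(7)

# Toy study area: the unit square. The "urban forest" is a rectangle covering
# 30% x 50% of it, so true coverage is 0.15. (Both shapes are invented;
# real NFIs use systematic first-phase point grids over the territory.)
def in_forest(x, y):
    return x < 0.3 and y < 0.5

true_cover = 0.3 * 0.5

# First-phase sample: n points placed uniformly at random over the area.
# The sample proportion of points falling in forest is an unbiased
# estimator of coverage, with the usual binomial variance estimator.
n = 20000
hits = sum(in_forest(random.random(), random.random()) for _ in range(n))
est = hits / n
se = (est * (1 - est) / n) ** 0.5  # estimated standard error

print(f"coverage estimate: {est:.3f} (true {true_cover:.3f}, SE {se:.4f})")
```

    The same proportion-based logic extends to abundance (counts per unit area) by recording what each sample point hits.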

  10. Large-scale assembly of colloidal particles (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample changes back to its brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  11. Pro website development and operations streamlining DevOps for large-scale websites

    CERN Document Server

    Sacks, Matthew


    Pro Website Development and Operations gives you the experience you need to create and operate a large-scale production website. Large-scale websites have their own unique set of problems regarding their design: problems that can get worse when agile methodologies are adopted for rapid results. Managing large-scale websites, deploying applications, and ensuring they are performing well often requires a full-scale team involving the development and operations sides of the company, two departments that don't always see eye to eye. When departments struggle with each other, it adds unnecessary comp

  12. High school incompletion and childhood maltreatment among street-involved young people in Vancouver, Canada. (United States)

    Barker, Brittany; Kerr, Thomas; Dong, Huiru; Wood, Evan; DeBeck, Kora


    While the link between educational attainment and future health and wellness is well understood, little investigation has considered the potential impacts of distinct forms of childhood maltreatment on high school completion. In the present study, the relationship between five categories of childhood maltreatment (physical, emotional, and sexual abuse, and physical and emotional neglect) and completion of high school education was examined using the Childhood Trauma Questionnaire (CTQ). From September 2005 to May 2013, data were collected for the At-Risk Youth Study (ARYS), a cohort of street-involved young people who use illicit drugs in Vancouver, Canada. We used logistic regression to examine the relationship between childhood maltreatment and high school completion, while controlling for a range of potential confounding variables. Specifically, five separate models for each category of maltreatment and two combined models were employed to examine the relative associations between, and cumulative impact of, different forms of childhood maltreatment and educational attainment. Among 974 young people, 737 (76%) reported not completing high school. In separate multivariable analyses, physical abuse, emotional abuse, physical neglect, and emotional neglect remained positively and independently associated with an incomplete high school education. In a combined multivariable model with all forms of childhood maltreatment considered together, emotional abuse (adjusted odds ratio = 2.08; 95% confidence interval: 1.51-2.86) was the only form of maltreatment that remained significantly associated with an incomplete high school education. The cumulative impact assessment indicated a moderate dose-dependent trend whereby the greater the number of different forms of childhood maltreatment, the greater the risk of not completing a high school education. These findings point to the need for trauma-informed interventions to improve educational attainment among vulnerable young people.
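
    The multivariable analysis behind an adjusted odds ratio can be sketched on synthetic data (the cohort, effect sizes, and confounder below are invented; this is not the ARYS data or the authors' exact model): fit a logistic regression and exponentiate the exposure coefficient.

```python
import math
import random

def fit_logistic(X, y, lr=1.0, iters=500):
    """Plain gradient-ascent logistic regression; w[0] is the intercept."""
    d = len(X[0])
    n = len(X)
    w = [0.0] * (d + 1)
    for _ in range(iters):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += yi - p
            for j, xj in enumerate(xi):
                grad[j + 1] += (yi - p) * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

# Synthetic cohort: an exposure with true log-odds 0.73 (odds ratio ~2.08,
# the value reported in the abstract) plus one binary confounder.
random.seed(42)
X, y = [], []
for _ in range(1000):
    abuse = 1.0 if random.random() < 0.4 else 0.0
    conf = 1.0 if random.random() < 0.3 else 0.0
    logit = 0.6 + 0.73 * abuse + 0.4 * conf  # high baseline rate of the outcome
    p = 1.0 / (1.0 + math.exp(-logit))
    X.append([abuse, conf])
    y.append(1 if random.random() < p else 0)

w = fit_logistic(X, y)
or_abuse = math.exp(w[1])  # adjusted odds ratio for the exposure
print(f"adjusted OR for exposure: {or_abuse:.2f}")
```

    Including the confounder in the model is what makes the exponentiated coefficient an adjusted, rather than crude, odds ratio.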

  13. Can wide consultation help with setting priorities for large-scale biodiversity monitoring programs?

    Directory of Open Access Journals (Sweden)

    Frédéric Boivin

    Full Text Available Climate and other global change phenomena affecting biodiversity require monitoring to track ecosystem changes and guide policy and management actions. Designing a biodiversity monitoring program is a difficult task that requires making decisions that often lack consensus due to budgetary constraints. As monitoring programs require long-term investment, they also require strong and continuing support from all interested parties. As such, stakeholder consultation is key to identifying priorities and making sound design decisions that have as much support as possible. Here, we present the results of a consultation conducted to serve as an aid for designing a large-scale biodiversity monitoring program for the province of Québec (Canada). The consultation took the form of a survey with 13 discrete choices involving trade-offs with respect to design priorities and 10 demographic questions (e.g., age, profession). The survey was sent to thousands of individuals having expected interests and knowledge about biodiversity and was completed by 621 participants. Overall, consensuses were few and it appeared difficult to create a design fulfilling the priorities of the majority. Most participants wanted (1) a monitoring design covering the entire territory and focusing on natural habitats; and (2) a focus on species related to ecosystem services, and on threatened and invasive species. The only demographic characteristic that was related to the type of prioritization was the declared level of knowledge in biodiversity (null to high), but even then the influence was quite small.

  14. Large-Scale Simulation Network Design Study (United States)


    involved in a single exercise. Ultimately, the system will be able to be used by hundreds of individual players. Finally, it attempts to portray a realistic... points act very much like terrain features in that as the vehicle moves, new points are generated in front of the vehicle, while old points are deleted as

  15. Thermal power generation projects "Large Scale Solar Heating"; EU-Thermie-Projekte "Large Scale Solar Heating"

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)


    The aim of this project is the preparation of the "Large-Scale Solar Heating" programme for a Europe-wide development of the technology. The demonstration programme developed from it was judged favourably by the experts but was not immediately (1996) accepted for financial subsidies. In November 1997 the EU Commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. Already by mid-1997 a smaller project had been approved, which had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)

  16. Magnetization of fluid phonons and large-scale curvature perturbations

    CERN Document Server

    Giovannini, Massimo


    The quasinormal mode of a gravitating and magnetized fluid in a spatially flat, isotropic and homogeneous cosmological background is derived in the presence of the fluid sources of anisotropic stress and of the entropic fluctuations of the plasma. The obtained gauge-invariant description involves a system of two coupled differential equations whose physical content is analyzed in all the most relevant situations. The Cauchy problem of large-scale curvature perturbations during the radiation dominated stage of expansion can be neatly formulated and its general solution is shown to depend on five initial data assigned when the relevant physical wavelengths are larger than the particle horizon. The consequences of this approach are explored.

  17. Enabling large-scale viscoelastic calculations via neural network acceleration (United States)

    DeVries, Phoebe M. R.; Thompson, T. Ben; Meade, Brendan J.


    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Computationally intensive viscoelastic codes must be evaluated at thousands of times and locations, and as a result, studies tend to adopt a few fixed rheological structures and model geometries and examine the predicted time-dependent deformation over short time scales. Learning a computationally efficient representation of viscoelastic solutions, at any time, location, and for a large range of rheological structures, allows these calculations to be done quickly and reliably, with high spatial and temporal resolutions. We demonstrate that this machine learning approach accelerates viscoelastic calculations by more than 50,000%. This magnitude of acceleration will enable the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible.
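
    The acceleration strategy, replacing an expensive forward solver with a cheap surrogate trained on a few solver runs, can be sketched in miniature (a single exponential relaxation stands in for the viscoelastic code, and a log-linear least-squares fit stands in for the trained neural network; both are illustrative assumptions, not the authors' model):

```python
import math

def viscoelastic_relaxation(t, tau=3.0, dt=1e-4):
    """'Expensive' forward model: explicit Euler on du/dt = -u/tau, u(0) = 1."""
    u, s = 1.0, 0.0
    while s < t:
        u += dt * (-u / tau)
        s += dt
    return u

# Offline phase: a handful of expensive runs provide the training data.
train_t = [0.5 * k for k in range(1, 9)]
train_u = [viscoelastic_relaxation(t) for t in train_t]

# Fit log(u) = a + b*t by least squares; the fitted closed form then stands
# in for the solver, as the trained network does in the paper.
n = len(train_t)
mt = sum(train_t) / n
ml = sum(math.log(u) for u in train_u) / n
b = sum((t - mt) * (math.log(u) - ml) for t, u in zip(train_t, train_u)) \
    / sum((t - mt) ** 2 for t in train_t)
a = ml - b * mt

def surrogate(t):
    # Evaluates in constant time, versus t/dt integration steps for the solver.
    return math.exp(a + b * t)

err = abs(surrogate(2.25) - viscoelastic_relaxation(2.25))
print(f"surrogate error at unseen time: {err:.2e}")
```

    Once trained, the surrogate makes sweeping thousands of times, locations, and rheologies essentially free, which is the source of the reported speedup.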

  18. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila


    This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems. The first uses decomposition techniques; the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...
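
    The statistical reweighting idea underlying BONUS, reusing one base sample and reweighting it by density ratios rather than resampling inside the uncertainty loop, can be sketched as follows (the distributions and the target quantity are invented for illustration; this is not the book's full algorithm):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# One base sample, drawn once from the base distribution N(0, 1).
random.seed(0)
base = [random.gauss(0.0, 1.0) for _ in range(40000)]

def reweighted_estimate(f, mu, sigma):
    """Estimate E[f(X)] for X ~ N(mu, sigma) by reweighting the base sample
    with density ratios (self-normalized importance weights) instead of
    drawing a fresh sample for every candidate distribution."""
    w = [normal_pdf(x, mu, sigma) / normal_pdf(x, 0.0, 1.0) for x in base]
    return sum(wi * f(xi) for wi, xi in zip(w, base)) / sum(w)

# E[X^2] under N(0.5, 0.8) is mu^2 + sigma^2 = 0.89; the reweighted
# estimate should land close without drawing any new samples.
est = reweighted_estimate(lambda x: x * x, 0.5, 0.8)
print(f"reweighted estimate of E[X^2]: {est:.3f}")
```

    In an optimization loop, only the weights are recomputed as the decision variables shift the uncertain distributions, which is what removes the inner sampling loop.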

  19. Probes of large-scale structure in the universe (United States)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph


    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  20. Development of large-scale structure in the Universe

    CERN Document Server

    Ostriker, J P


    This volume grew out of the 1988 Fermi lectures given by Professor Ostriker, and is concerned with cosmological models that take into account the large scale structure of the universe. He starts with homogeneous isotropic models of the universe and then, by considering perturbations, he leads us to modern cosmological theories of the large scale, such as superconducting strings. This will be an excellent companion for all those interested in the cosmology and the large scale nature of the universe.

  1. Large-Scale Pattern Discovery in Music (United States)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.

  2. Autonomic Computing Paradigm For Large Scale Scientific And Engineering Applications (United States)

    Hariri, S.; Yang, J.; Zhang, Y.


    Large-scale distributed scientific applications are highly adaptive and heterogeneous in terms of their computational requirements. The computational complexity associated with each computational region or domain varies continuously and dramatically both in space and time throughout the whole life cycle of the application execution. Furthermore, the underlying distributed computing environment is similarly complex and dynamic in the availabilities and capacities of the computing resources. These challenges combined make the current paradigms, which are based on passive components and static compositions, ineffectual. The Autonomic Computing paradigm is an approach that efficiently addresses the complexity and dynamism of large-scale scientific and engineering applications and realizes their self-management. In this presentation, we present an Autonomic Runtime Manager (ARM) that supports the development of autonomic applications. The ARM includes two modules: an online monitoring and analysis module and an autonomic planning and scheduling module. The ARM behaves as a closed-loop control system that dynamically controls and manages the execution of the applications at runtime. It regularly senses the state changes of both the applications and the underlying computing resources. It then uses this runtime information and prior knowledge about the application behavior and its physics to identify the appropriate solution methods as well as the required computing and storage resources. Consequently, this approach enables us to develop autonomic applications, which are capable of self-management and self-optimization. We have developed and implemented the autonomic computing paradigms for several large-scale applications such as wildfire simulations, simulations of flow through variably saturated geologic formations, and life sciences. The distributed wildfire simulation models the wildfire spread behavior by considering such factors as fuel

  3. Perceptions of a drug prevention public service announcement campaign among street-involved youth in Vancouver, Canada: a qualitative study. (United States)

    Ti, Lianlian; Fast, Danya; Small, William; Kerr, Thomas


    Due to the popularity of public service announcements (PSAs), as well as the broader health and social harms associated with illicit drug use, this study sought to investigate how drug prevention messages found in the Government of Canada's DrugsNot4Me campaign were understood, experienced, and engaged with among a group of street-involved young people in Vancouver, Canada. Qualitative interviews were conducted with 25 individuals enrolled in the At-Risk Youth Study, and a thematic analysis was conducted. Findings indicate that the campaign's messages neither resonated with "at-risk youth", nor provided information or resources for support. In some cases, the messaging exacerbated the social suffering experienced by these individuals. This study underscores the importance of rigorous evaluation of PSAs and the need to consider diverting funds allocated to drug prevention campaigns to social services that can meaningfully address the structural drivers of drug-related harms among vulnerable youth populations.

  4. Ultra-large-scale electronic structure theory and numerical algorithm


    Hoshi, Takeo


    This article is composed of two parts; In the first part (Sec. 1), the ultra-large-scale electronic structure theory is reviewed for (i) its fundamental numerical algorithm and (ii) its role in nano-material science. The second part (Sec. 2) is devoted to the mathematical foundation of the large-scale electronic structure theory and their numerical aspects.

  5. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    were the main reasons for the failure of large-scale jatropha plantations in Ethiopia. The findings in Chapter 3 show that when the use of family labour is combined with easy access to credit and technology, outgrowers on average achieve higher productivity than that obtained on large-scale plantations...

  6. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.


    The attention for climate change impact on the world's coastlines has established large scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large scale coastal evolution, are discussed.

  7. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Large-scale Agricultural Land Acquisitions in West Africa. As agriculture becomes more strategically important in the world, large-scale land acquisition - or "land grabbing" - is a growing concern for West Africa. Recent increases in commodity prices have led some governments and private investors to purchase or lease ...

  8. Methodological approaches to comparing information about bicycle accidents internationally: a case study involving Canada and Germany. (United States)

    Juhra, Christian; Wieskötter, Britta; Bellwood, Paule; von Below, Ariane; Fyfe, Murray; Salkeld, Sonia; Borycki, Elizabeth; Kushniruk, Andre


    The use of bicycles as a means of healthy and eco-friendly transportation is currently actively promoted in many industrialized countries. However, the number of severe bicycle accidents rose significantly in Germany and Canada in 2011. In order to identify risk factors for bicycle accidents and possible means of prevention, a study was initiated that analyses bicycle accidents from selected regions in both countries. Due to different healthcare systems and regulations, the data must be selected in different ways in each country before it can be analyzed. Data are collected by means of questionnaires in Germany and using hybrid electronic-paper records in Canada. Using these methods, all relevant data can be collected in both countries.

  9. Safety aspects of large-scale combustion of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Edeskuty, F.J.; Haugh, J.J.; Thompson, R.T.


    Recent hydrogen-safety investigations have studied the possible large-scale effects of phenomena such as the accumulation of combustible hydrogen-air mixtures in large, confined volumes. Of particular interest are safe methods for the disposal of the hydrogen and the pressures which can arise from its confined combustion. Consequently, tests of the confined combustion of hydrogen-air mixtures were conducted in a 2100 m³ volume. These tests show that continuous combustion, as the hydrogen is generated, is a safe method for its disposal. It has also been seen that, for hydrogen concentrations up to 13 vol %, it is possible to predict the maximum pressures that can occur upon ignition of premixed hydrogen-air atmospheres. In addition, information has been obtained concerning the survivability of the equipment that is needed to recover from an accident involving hydrogen combustion. An accident that involved the inadvertent mixing of hydrogen and oxygen gases in a tube trailer gave evidence that, under the proper conditions, hydrogen combustion can transit to a detonation. If detonation occurs, the pressures which can be experienced are much higher, although short in duration.

  10. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif


    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  11. Large Scale Applications of HTS in New Zealand (United States)

    Wimbush, Stuart C.

    New Zealand has one of the longest-running and most consistently funded (relative to GDP) programmes in high temperature superconductor (HTS) development and application worldwide. As a consequence, it has a sustained breadth of involvement in HTS technology development stretching from the materials discovery right through to burgeoning commercial exploitation. This review paper outlines the present large scale projects of the research team at the newly-established Robinson Research Institute of Victoria University of Wellington. These include the construction and grid-based testing of a three-phase 1 MVA 2G HTS distribution transformer utilizing Roebel cable for its high-current secondary windings and the development of a cryogen-free conduction-cooled 1.5 T YBCO-based human extremity magnetic resonance imaging system. Ongoing activities supporting applications development such as low-temperature full-current characterization of commercial superconducting wires and the implementation of inductive flux-pump technologies for efficient brushless coil excitation in superconducting magnets and rotating machines are also described.

  12. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    Directory of Open Access Journals (Sweden)

    Yang Xu


    Full Text Available Large-scale multiagent teamwork has become popular in various domains. As in human social infrastructure, each agent coordinates with only some of the others, forming a peer-to-peer complex network structure, and their organization has been shown to be a key factor influencing performance. We identify three key factors for expediting team performance. First, complex network effects can promote team performance. Second, coordination interactions tend to be routed from their sources to capable agents; although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team, which is independent of the network connections. Third, agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents can be statistically recorded, an integrated network adjustment algorithm can be set up by combining these three factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results support our design: the reorganized network is more capable of coordinating heterogeneous agents.

  13. Large scale multiplex PCR improves pathogen detection by DNA microarrays

    Directory of Open Access Journals (Sweden)

    Krönke Martin


    Full Text Available Abstract Background: Medium-density DNA microchips that carry a collection of probes for a broad spectrum of pathogens have the potential to be powerful tools for simultaneous species identification, detection of virulence factors, and antimicrobial resistance determinants. However, their widespread use in microbiological diagnostics is limited by the problem of low pathogen numbers in clinical specimens, which yield relatively low amounts of pathogen DNA. Results: To increase the detection power of a fluorescence-based prototype microarray designed to identify pathogenic microorganisms involved in sepsis, we propose a large-scale multiplex PCR (LSplex PCR) for amplification of several dozen gene segments of 9 pathogenic species. This protocol employs a large set of primer pairs, potentially able to amplify 800 different gene segments that correspond to the capture probes spotted on the microarray. The LSplex protocol is shown to selectively amplify only the gene segments corresponding to the specific pathogen present in the analyte. Application of LSplex increases the microarray detection of target templates by a factor of 100 to 1000. Conclusion: Our data provide a proof of principle for the improvement of detection of pathogen DNA by microarray hybridization using LSplex PCR.

  14. Data Sketching for Large-Scale Kalman Filtering (United States)

    Berberidis, Dimitris; Giannakis, Georgios B.


    In an age of exponentially increasing data generation, performing inference tasks by utilizing the available information in its entirety is not always an affordable option. The present paper puts forth approaches to render tracking of large-scale dynamic processes via a Kalman filter affordable, by processing a reduced number of data. Three distinct methods are introduced for reducing the number of data involved in the correction step of the filter. Towards this goal, the first two methods employ random projections and innovation-based censoring to effect dimensionality reduction and measurement selection, respectively. The third method achieves reduced complexity by leveraging sequential processing of observations and selecting a few informative updates based on an information-theoretic metric. Simulations on synthetic data compare the proposed methods with competing alternatives and corroborate their efficacy in terms of estimation accuracy over complexity reduction. Finally, monitoring large networks is considered as an application domain, with the proposed methods tested on Kronecker graphs to evaluate their efficiency in tracking traffic matrices and time-varying link costs.
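
    The first of the three methods, random-projection sketching of the measurements, can be illustrated in isolation (the dimensions and test vector below are invented; this is not the paper's filter, only the norm-preserving compression its correction step relies on):

```python
import math
import random

random.seed(1)
m, k = 500, 200  # ambient measurement dimension and (smaller) sketch dimension

# Random sign projection S (k x m), scaled so that E[||S y||^2] = ||y||^2,
# a standard sketching matrix for dimensionality reduction.
S = [[random.choice((-1.0, 1.0)) / math.sqrt(k) for _ in range(m)]
     for _ in range(k)]

def sketch(y):
    # Compressed measurement: k numbers instead of m.
    return [sum(row[j] * y[j] for j in range(m)) for row in S]

# A synthetic measurement vector; the sketch should roughly preserve its
# norm, which is what lets a filter work with far fewer numbers per update.
y = [math.sin(0.1 * j) for j in range(m)]
ny = math.sqrt(sum(v * v for v in y))
nz = math.sqrt(sum(v * v for v in sketch(y)))
print(f"norm ratio after sketching: {nz / ny:.3f}")
```

    In the filtering context, the same projection is applied to the measurement model, so the correction step runs on k-dimensional data.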

  15. Episodic memory in aspects of large-scale brain networks

    Directory of Open Access Journals (Sweden)

    Woorim Jeong


    Full Text Available Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested the involvement of a more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies have also revealed that memory function is subserved not only by the MTL system but also by a distributed network, particularly the default-mode network. Furthermore, researchers have begun to investigate memory networks throughout the entire brain, not restricted to the specific resting-state network. Altered patterns of functional connectivity among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications for a novel interventional therapy for memory impairment.

  16. Episodic memory in aspects of large-scale brain networks (United States)

    Jeong, Woorim; Chung, Chun Kee; Kim, June Sic


    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested the involvement of a more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies have also revealed that memory function is subserved not only by the MTL system but also by a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain, not restricted to the specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications for a novel interventional therapy for memory impairment. PMID:26321939

  17. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.


    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  18. Food security through large scale investments in agriculture (United States)

    Rulli, M.; D'Odorico, P.


    Most of the human appropriation of freshwater resources is for food production. There is some concern that in the near future the finite freshwater resources available on Earth might not be sufficient to meet the increasing human demand for agricultural products. In the late 1700s Malthus argued that in the long run humanity would not have enough resources to feed itself. Malthus' analysis, however, did not account for the emergence of technological innovations that could increase the rate of food production. Modern and contemporary history has seen at least three major technological advances that have increased humans' access to food, namely, the industrial revolution, the green revolution, and the intensification of global trade. Here we argue that a fourth revolution has just started to happen. It involves foreign direct investments in agriculture, which intensify the crop yields of potentially highly productive agricultural lands by introducing the use of more modern technologies. The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large-scale land acquisitions for commercial farming will bring the technology required to close the existing yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of verified land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with large-scale land acquisitions. We

  19. Large-scale innovation and change in UK higher education

    National Research Council Canada - National Science Library

    Brown, Stephen


    The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion...

  20. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva


    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  1. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit


    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of a large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.

  2. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.


    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  3. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    National Research Council Canada - National Science Library

    Yang Xu; Pengfei Liu; Xiang Li


      Large-scale multiagent teamwork has been popular in various domains. Similar to human society infrastructure, agents only coordinate with some of the others, with a peer-to-peer complex network structure...

  4. Large-Scale Troughs on 4 Vesta: Observations and Analysis (United States)

    Buczkowski, D. L.; Kahn, E.; Barnouin, O.; Wyrick, D. Y.; Gaskell, R. W.; Yingst, R. A.; Williams, D. A.; Garry, W. B.; Le Corre, L.; Nathues, A.; Scully, J. E. C.; Blewett, D.; Hiesinger, H.; Mest, S.; Schenk, P. M.; Schmedemann, N.; Krohn, K.; Jaumann, R.; Raymond, C. A.; Pieters, C. M.; Roatsch, T.; Preusker, F.; Russell, C. T.


    Images of Vesta taken by the Dawn Framing Camera reveal the presence of large-scale structural features on the surface of the asteroid. Analysis of these structures supports models for the formation of the south polar basins.

  5. A large-scale peer teaching programme - acceptance and benefit. (United States)

    Schuetz, Elisabeth; Obirei, Barbara; Salat, Daniela; Scholz, Julia; Hann, Dagmar; Dethleffsen, Kathrin


    The involvement of students in the delivery of university teaching through peer-assisted learning formats is common practice. Publications on this topic focus exclusively on strictly defined situations within the curriculum and selected target groups. This study, in contrast, presents and evaluates a large-scale, structured and quality-assured peer teaching programme, which offers diverse and targeted courses throughout the preclinical part of the medical curriculum. The large-scale peer teaching programme consists of subject-specific and interdisciplinary tutorials that address all scientific, physiological and anatomic subjects of the preclinical curriculum, as well as tutorials with contents exceeding the formal curriculum. In the study year 2013/14 a total of 1,420 lessons were offered as part of the programme. Paper-based evaluations were conducted over the full range of courses. Acceptance and benefit of this peer teaching programme were evaluated in a retrospective study covering the period 2012 to 2014. Usage of tutorials by students who commenced their studies in 2012/13 (n=959) was analysed from 2012 to 2014. Based on the results of 13 first assessments in the preclinical subjects anatomy, biochemistry and physiology, the students were assigned to one of five groups. These groups were compared according to participation in the tutorials. To investigate the benefit of tutorials of the peer teaching programme, the results of biochemistry re-assessments of participants and non-participants of tutorials in the years 2012 to 2014 (n=188, 172 and 204, respectively) were compared using Kolmogorov-Smirnov and Chi-square tests as well as the effect size Cohen's d. Almost 70% of the students attended the voluntary additional programme during their preclinical studies. The students participating in the tutorials had achieved different levels of proficiency in first assessments. The acceptance of different kinds of tutorials appears to correlate with their
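
    Of the statistics named above, Cohen's d is simple to illustrate. The sketch below uses the pooled-standard-deviation variant (the study's exact variant is not specified here) on hypothetical exam scores:

```python
import math

def cohens_d(a, b):
    """Cohen's d with the pooled standard deviation: the standardized
    mean difference between two independent groups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Hypothetical re-assessment scores: tutorial participants vs non-participants.
part = [72, 68, 75, 80, 66, 74]
nonp = [65, 60, 70, 63, 67, 62]
print(round(cohens_d(part, nonp), 2))  # -> 1.82, a large effect by convention
```

Values around 0.2, 0.5 and 0.8 are conventionally read as small, medium and large effects.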

  6. Explosive Concrete Spalling during Large-Scale Fire Resistance Tests


    Maluk, Cristian; Bisby, Luke; Terrasi, Giovanni P.


    This paper presents a comprehensive investigation of explosive heat-induced spalling observed during a set of large-scale fire resistance tests (or standard furnace tests) on prestressed concrete slabs. The study, based on data from large-scale tests, examines the influence of numerous design parameters in the occurrence of spalling (age of concrete, inclusion of polypropylene fibres, depth of the slab, and prestressing level). Furthermore, a careful thermal analysis of the tested slabs is pr...

  7. Aggregation algorithm towards large-scale Boolean network analysis


    Zhao, Y.; Kim, J.; Filippone, M.


    The analysis of large-scale Boolean network dynamics is of great importance in understanding complex phenomena where systems are characterized by a large number of components. The computational cost to reveal the number of attractors and the period of each attractor increases exponentially as the number of nodes in the networks increases. This paper presents an efficient algorithm to find attractors for medium to large-scale networks. This is achieved by analyzing subnetworks within the netwo...
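
    The exponential cost that motivates such aggregation algorithms is easy to see in a brute-force baseline. The sketch below is illustrative only (a made-up 3-node network, not the paper's algorithm): it enumerates all 2^n synchronous states and lists the attractors exactly.

```python
from itertools import product

# Hypothetical 3-node Boolean network: each rule maps the current
# state tuple to one node's next value (synchronous update).
rules = [
    lambda s: s[1] and not s[2],   # x0'
    lambda s: s[0] or s[2],        # x1'
    lambda s: not s[0],            # x2'
]

def step(state):
    return tuple(int(r(state)) for r in rules)

def attractors(rules):
    """Exhaustively walk all 2^n states and collect the cycles they fall
    into -- exact but exponential in n, which is why aggregation or
    decomposition methods are needed for large networks."""
    n = len(rules)
    found = set()
    for start in product((0, 1), repeat=n):
        seen = {}
        s = start
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        cycle = list(seen)[seen[s]:]          # states on the cycle, in order
        k = cycle.index(min(cycle))           # rotate to a canonical form
        found.add(tuple(cycle[k:] + cycle[:k]))
    return found

atts = attractors(rules)
# This toy network has two fixed points and one period-2 cycle.
print(len(atts), sorted(len(a) for a in atts))  # -> 3 [1, 1, 2]
```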

  8. Boundary-Layer Bypass Transition Over Large-Scale Bodies (United States)


    AFRL-AFOSR-UK-TR-2017-0007. Boundary-layer bypass transition over large-scale bodies. Pierre Ricco, University of Sheffield. Final report, 01 Sep 2013 to 31 Aug 2016. The research showed that the streamwise wavenumber plays a key role in the shape of the streamwise velocity profile compared to the flat-plate boundary layer.

  9. Balancing modern Power System with large scale of wind power


    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela; Sørensen, Poul Ejnar


    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the s...

  10. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.


    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  11. Output regulation of large-scale hydraulic networks with minimal steady state power consumption

    NARCIS (Netherlands)

    Jensen, Tom Nørgaard; Wisniewski, Rafał; De Persis, Claudio; Kallesøe, Carsten Skovmose


    An industrial case study involving a large-scale hydraulic network is examined. The hydraulic network underlies a district heating system, with an arbitrary number of end-users. The problem of output regulation is addressed along with an optimization criterion for the control. The fact that the

  12. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server


    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  13. A link between neuroscience and informatics: large-scale modeling of memory processes. (United States)

    Horwitz, Barry; Smith, Jason F


    Utilizing advances in functional neuroimaging and computational neural modeling, neuroscientists have increasingly sought to investigate how distributed networks, composed of functionally defined subregions, combine to produce cognition. Large-scale, biologically realistic neural models, which integrate data from cellular, regional, whole brain, and behavioral sources, delineate specific hypotheses about how these interacting neural populations might carry out high-level cognitive tasks. In this review, we discuss neuroimaging, neural modeling, and the utility of large-scale biologically realistic models using modeling of short-term memory as an example. We present a sketch of the data regarding the neural basis of short-term memory from non-human electrophysiological, computational and neuroimaging perspectives, highlighting the multiple interacting brain regions believed to be involved. Through a review of several efforts, including our own, to combine neural modeling and neuroimaging data, we argue that large scale neural models provide specific advantages in understanding the distributed networks underlying cognition and behavior.

  14. Ideal and actual involvement of community pharmacists in health promotion and prevention: a cross-sectional study in Quebec, Canada

    Directory of Open Access Journals (Sweden)

    Laliberté Marie-Claude


    Full Text Available Abstract Background An increased interest is observed in broadening community pharmacists' role in public health. To date, little information has been gathered in Canada on community pharmacists' perceptions of their role in health promotion and prevention; however, such data are essential to the development of public-health programs in community pharmacy. A cross-sectional study was therefore conducted to explore the perceptions of community pharmacists in urban and semi-urban areas regarding their ideal and actual levels of involvement in providing health-promotion and prevention services and the barriers to such involvement. Methods Using a five-step modified Dillman's tailored design method, a questionnaire with 28 multiple-choice or open-ended questions (11 pages plus a cover letter) was mailed to a random sample of 1,250 pharmacists out of 1,887 community pharmacists practicing in Montreal (Quebec, Canada) and surrounding areas. It included questions on pharmacists' ideal level of involvement in providing health-promotion and preventive services; which services were actually offered in their pharmacy, the employees involved, the frequency, and duration of the services; the barriers to the provision of these services in community pharmacy; their opinion regarding the most appropriate health professionals to provide them; and the characteristics of pharmacists, pharmacies and their clientele. Results In all, 571 out of 1,234 (46.3%) eligible community pharmacists completed and returned the questionnaire. Most believed they should be very involved in health promotion and prevention, particularly in smoking cessation (84.3%); screening for hypertension (81.8%), diabetes (76.0%) and dyslipidemia (56.9%); and sexual health (61.7% to 89.1%); however, fewer respondents reported actually being very involved in providing such services (5.7% [lifestyle, including smoking cessation], 44.5%, 34.8%, 6.5% and 19.3%, respectively). The main barriers to the

  15. The Internet As a Large-Scale Complex System (United States)

    Park, Kihong; Willinger, Walter


    The Internet may be viewed as a "complex system" with diverse features and many components that can give rise to unexpected emergent phenomena, revealing much about its own engineering. This book brings together chapter contributions from a workshop held at the Santa Fe Institute in March 2001. This volume captures a snapshot of some features of the Internet that may be fruitfully approached using a complex systems perspective, that is, using interdisciplinary tools and methods to tackle the subject area. The Internet penetrates the socioeconomic fabric of everyday life; a broader and deeper grasp of the Internet may be needed to meet the challenges facing the future. The resulting empirical data have already proven to be invaluable for gaining novel insights into the network's spatio-temporal dynamics, and can be expected to become even more important when trying to explain the Internet's complex and emergent behavior in terms of elementary networking-based mechanisms. The discoveries of fractal or self-similar network traffic traces, power-law behavior in network topology and World Wide Web connectivity are instances of unsuspected, emergent system traits. Another important factor at the heart of fair, efficient, and stable sharing of network resources is user behavior. Network systems, when habited by selfish or greedy users, take on the traits of a noncooperative multi-party game, and their stability and efficiency are integral to understanding the overall system and its dynamics. Lastly, fault-tolerance and robustness of large-scale network systems can exhibit spatial and temporal correlations whose effective analysis and management may benefit from rescaling techniques applied in certain physical and biological systems. The present book will bring together several of the leading workers involved in the analysis of complex systems with the future development of the Internet.

  16. Large-scale machine learning for metagenomics sequence classification. (United States)

    Vervier, Kévin; Mahé, Pierre; Tournoud, Maud; Veyrieras, Jean-Baptiste; Vert, Jean-Philippe


    Metagenomics characterizes the taxonomic diversity of microbial communities by sequencing DNA directly from an environmental sample. One of the main challenges in metagenomics data analysis is the binning step, where each sequenced read is assigned to a taxonomic clade. Because of the large volume of metagenomics datasets, binning methods need fast and accurate algorithms that can operate with reasonable computing requirements. While standard alignment-based methods provide state-of-the-art performance, compositional approaches that assign a taxonomic class to a DNA read based on the k-mers it contains have the potential to provide faster solutions. We propose a new rank-flexible machine learning-based compositional approach for taxonomic assignment of metagenomics reads and show that it benefits from increasing the number of fragments sampled from reference genomes to tune its parameters, up to a coverage of about 10, and from increasing the k-mer size to about 12. Tuning the method involves training machine learning models on about 10(8) samples in 10(7) dimensions, which is out of reach of standard software but can be done efficiently with modern implementations for large-scale machine learning. The resulting method is competitive in terms of accuracy with well-established alignment and composition-based tools for problems involving a small to moderate number of candidate species and for reasonable amounts of sequencing errors. We show, however, that machine learning-based compositional approaches are still limited in their ability to deal with problems involving a greater number of species and are more sensitive to sequencing errors. We finally show that the new method outperforms the state-of-the-art in its ability to classify reads from species whose lineages are absent from the reference database, and confirm that compositional approaches achieve faster prediction times, with a gain of 2-17 times with respect to the BWA-MEM short read mapper, depending on the number of
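
    The compositional representation such methods build on can be sketched as follows: each read is mapped to a vector of k-mer counts of length 4^k, the feature space on which a linear classifier (e.g. one-vs-rest over taxonomic clades) is then trained. This is a generic illustration, not the authors' implementation; the rolling-index encoding is an assumption.

```python
import numpy as np

def kmer_vector(read, k=4):
    """Map a DNA read to its k-mer count vector of length 4**k."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    v = np.zeros(4 ** k)
    code = 0
    for i, base in enumerate(read):
        # rolling base-4 index: keeps the code of the last k bases
        code = (code * 4 + idx[base]) % (4 ** k)
        if i >= k - 1:
            v[code] += 1
    return v

read = "ACGTACGTAC"
v = kmer_vector(read, k=4)
# a 10-base read contains 10 - 4 + 1 = 7 overlapping 4-mers
print(int(v.sum()), len(v))  # -> 7 256
```

With k = 12, as tuned above, the vectors live in 4^12 ≈ 1.7 x 10^7 dimensions, which is why large-scale learning implementations are needed.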

  17. Seismic safety in conducting large-scale blasts (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.


    In mining enterprises a drilling and blasting method is used to prepare hard rocks for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter recording to a laptop. Recordings of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth’s surface vibrations are determined. Seismic safety was evaluated against the permissible value of vibration velocity. For cases exceeding permissible values, recommendations were developed to reduce the level of seismic impact.

  18. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela


    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  19. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai


    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  20. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    focuses on large-scale applications and contributes with methods to actualise the true potential of disaggregate models. To achieve this target, contributions are given to several components of traffic assignment modelling, by (i) enabling the utilisation of the increasingly available data sources...... on individual behaviour in the model specification, (ii) proposing a method to use disaggregate Revealed Preference (RP) data to estimate utility functions and provide evidence on the value of congestion and the value of reliability, (iii) providing a method to account for individual mis...... is essential in the development and validation of realistic models for large-scale applications. Nowadays, modern technology facilitates easy access to RP data and allows large-scale surveys. The resulting datasets are, however, usually very large and hence data processing is necessary to extract the pieces...

  1. PKI security in large-scale healthcare networks. (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos


During the past few years, many PKI (Public Key Infrastructure) architectures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face numerous challenges, particularly when deployed over large-scale healthcare networks. In this paper, we propose a PKI architecture to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed architecture addresses the trust issues that arise in a large-scale healthcare network comprising multiple PKI domains.

  2. Large Scale Processes and Extreme Floods in Brazil (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.


Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in these regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated from the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced space, clusters are obtained in order to better understand the role of regional moisture recycling versus teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. For individual sites we investigate the exceedance probability at which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs. large-scale).
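The dimensionality-reduction step described in this record can be sketched with a plain-NumPy kernel PCA. This is an unsupervised variant, whereas the study uses a supervised one; the RBF kernel, the `gamma` value, and the toy "flux snapshot" data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf_kernel_pca(X, gamma=1.0, n_components=2):
    """Project rows of X onto the leading kernel principal components (RBF kernel)."""
    # Pairwise squared Euclidean distances between samples
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * D2)
    # Double-center the kernel matrix (centering in feature space)
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; eigh returns eigenvalues in ascending order
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so projections have the right variance
    return vecs * np.sqrt(np.maximum(vals, 0.0))

# Toy stand-in for moisture-flux field snapshots, reduced to 2 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
Z = rbf_kernel_pca(X, gamma=0.1)
print(Z.shape)  # (50, 2)
```

Clustering (e.g. k-means) would then be run on the rows of `Z` rather than on the raw high-dimensional fields.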

  3. Magnetic Helicity and Large Scale Magnetic Fields: A Primer (United States)

    Blackman, Eric G.


Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly from their dynamical influence, such as jets. Whether large scale fields are amplified in situ or are a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus as a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
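The two properties invoked in this abstract can be stated compactly; the following is a standard textbook formulation, not quoted from the primer itself:

```latex
H \;\equiv\; \int_V \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}V ,
\qquad \mathbf{B} = \nabla\times\mathbf{A},
\qquad E_M \;=\; \int_V \frac{B^2}{2\mu_0}\,\mathrm{d}V .
```

In a highly conducting plasma, $H$ decays far more slowly than $E_M$; minimizing $E_M$ at fixed $H$ yields the force-free (Taylor) state $\nabla\times\mathbf{B} = \alpha\mathbf{B}$, whose lowest-energy configuration places the helical structure on the largest available scale, in line with properties (1) and (2) above.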

  4. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan


This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1,125,000 elements in 2D and 128,000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  5. [Issues of large scale tissue culture of medicinal plant]. (United States)

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai


In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of producing medicinal plants, problems remain, such as the stability of the material, the safety of transgenic medicinal plants and the optimization of culture conditions. Establishing a sound evaluation system suited to the characteristics of each medicinal plant is the key measure to ensure the sustainable development of large-scale tissue culture of medicinal plants.

  6. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard


In a future power system with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising... with high wind power penetration. This paper presents a review of the electricity storage technologies relevant for large power systems. The paper also presents an estimation of the economic feasibility of electricity storage using the west Danish power market area as a case.

  7. Large-Scale Graph Processing Analysis using Supercomputer Cluster (United States)

    Vildario, Alfrido; Fitriyani; Nugraha Nurkahfi, Galih


Graphs are widely used in sectors such as automotive, traffic, image processing and many more. These applications produce large-scale graphs, so processing them requires long computation times and high-specification resources. This research addresses the analysis of large-scale graph processing on a supercomputer cluster. We implemented graph processing using the Breadth-First Search (BFS) algorithm for the single-destination shortest path problem. The parallel BFS implementation with the Message Passing Interface (MPI) used the supercomputer cluster at the High Performance Computing Laboratory, Computational Science, Telkom University, and the Stanford Large Network Dataset Collection. The results show that the implementation achieves an average speed-up of more than 30 times and an efficiency of almost 90%.
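The serial core of the algorithm is easy to sketch. The following level-by-level BFS recovers a shortest path to a single destination in an unweighted graph; the parallel version in the record distributes the frontier across MPI ranks, which is not shown here, and the toy graph is an illustrative assumption:

```python
from collections import deque

def bfs_shortest_path(adj, src, dst):
    """Unweighted shortest path from src to dst via breadth-first search.
    adj: dict mapping node -> list of neighbour nodes."""
    parent = {src: None}
    frontier = deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            # Walk parent pointers back to the source to recover the path
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in parent:  # first visit = shortest distance
                parent[v] = u
                frontier.append(v)
    return None  # dst unreachable from src

graph = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(bfs_shortest_path(graph, 0, 4))  # [0, 1, 3, 4]
```

An MPI formulation would partition the vertex set over ranks and exchange frontier vertices with `Alltoallv`-style communication at each level.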

  8. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob


Generation expansion planning (GEP) is the problem of finding the optimal strategy for planning the construction of new generation while satisfying technical and economic constraints. In the deregulated and competitive environment, large-scale integration of wind generation (WG) in power systems has...... necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation is proposed in this paper. The first phase is the investment decision, while the second phase is production...

  9. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar


    Large-scale inverse problems and associated uncertainty quantification has become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  10. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli


Full Text Available This paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC.Marc/Mentat, analyzes its fatigue strain, simulates the loads of the flange fatigue working condition with the Bladed software, and acquires the flange fatigue load spectrum with the rain-flow counting method. Finally, it carries out the fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The analysis provides a new approach to flange fatigue analysis for large-scale wind turbine generators and has practical engineering value.
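The final damage-summation step rests on the Palmgren-Miner rule named in the abstract. A minimal sketch follows; the cycle counts and allowable-cycle values are hypothetical, not taken from the paper:

```python
def miner_damage(cycle_counts, cycles_to_failure):
    """Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i).
    Failure is predicted when D >= 1."""
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

# Hypothetical rain-flow output: counted cycles n_i per stress range,
# and the allowable cycles N_i from an assumed S-N curve
n = [1e4, 5e3, 2e2]
N = [1e6, 2e5, 1e4]
D = miner_damage(n, N)
print(round(D, 3))  # 0.055
```

In practice the `n` values come from rain-flow counting of the simulated load history and the `N` values from the component's S-N curve.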

  11. Large-Scale Spray Releases: Initial Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.


    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  12. Prevalence and characteristics of opioid-related deaths involving alcohol in Ontario, Canada

    NARCIS (Netherlands)

    Gomes, Tara; Juurlink, David N.; Mamdani, Muhammad M.; Paterson, J. Michael; van den Brink, Wim


    Background: While it is well known that patients receiving opioids should refrain from alcohol consumption, little is known about the involvement of alcohol in opioid-related deaths. Methods: We conducted a population-based analysis of opioid-related deaths in Ontario with and without alcohol

  13. New Visions for Large Scale Networks: Research and Applications (United States)

Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  14. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The objective of this project is to test whether the Food and Agriculture Organization's Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security can help increase accountability for large-scale land acquisitions in Mali, Nigeria, Uganda, and South ...

  15. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround (United States)

    Peurach, Donald J.; Neumerski, Christine M.


    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  16. Interrogating Large-Scale Land Acquisitions and Their Implications ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

Ghana: two regions, the Eastern Region (banana plantation) and the Greater Accra Region (pineapple and mango plantations). Uganda: the research covered two sites, Amuru (sugar plantation) and Mubende District (coffee plantation).

  17. The Large Scale Structure: Polarization Aspects R. F. Pizzo

    Indian Academy of Sciences (India)

Abstract. Polarized radio emission is detected at various scales in the ... are located at the nodes of the filamentary large-scale structure of the cosmic web and form by subsequent merging .... After calibration, we produced Q and U channel images for each dataset and we inspected them to remove ...

  18. Re-engineering non-flow large scale maintenance organisations

    NARCIS (Netherlands)

    de Waard, A.J.


    The research described in this dissertation deals with the organisational structure of non-flow Large Scale Maintenance Organisations (LSMO's). An LSMO is an enterprise of a considerable size (i.e. more than 150 employees) with as its primary tasks the maintenance, overhaul and modification of

  19. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.


    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured ...

  20. Large scale synthesis and characterization of Ni nanoparticles by ...

    Indian Academy of Sciences (India)

Bulletin of Materials Science, Volume 31, Issue 1. Large scale synthesis and characterization of Ni nanoparticles by solution reduction method. Huazhi Wang, Xinli Kou, Jie Zhang, Jiangong Li. Nanomaterials, Volume 31, Issue 1, February 2008, pp 97-100.

  1. Temporal Variation of Large Scale Flows in the Solar Interior

    Indian Academy of Sciences (India)

We attempt to detect short-term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations by about 10 m/s on the scale of a few days.

  2. Temporal Variation of Large Scale Flows in the Solar Interior ...

    Indian Academy of Sciences (India)


Abstract. We attempt to detect short term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations by about 10 m/s on the scale of ...

  3. Extracting Useful Semantic Information from Large Scale Corpora of Text (United States)

    Mendoza, Ray Padilla, Jr.


    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  4. Newton iterative methods for large scale nonlinear systems

    Energy Technology Data Exchange (ETDEWEB)

    Walker, H.F.; Turner, K.


The objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)
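The basic scheme behind such methods can be sketched as follows: each Newton step solves a linearized system J(x) dx = -F(x). This toy version uses a dense finite-difference Jacobian and a direct solve, which the large-scale methods in the abstract would replace with iterative (Krylov) linear algebra; the 2x2 test system is made up for illustration:

```python
import numpy as np

def newton(F, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton's method for F(x) = 0 with a forward-difference Jacobian.
    Each step solves J(x) dx = -F(x) directly; large-scale variants
    replace this with an inner Krylov iteration (Newton-Krylov)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (F(xp) - Fx) / eps  # finite-difference column
        x = x + np.linalg.solve(J, -Fx)
    return x

# Test system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
root = newton(F, [2.0, 0.0])
print(np.round(F(root), 8))  # residual ~[0, 0]
```

Replacing `np.linalg.solve` with GMRES and a Jacobian-vector product is what makes the approach viable when `J` is too large to form.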

  5. Evaluating Large-scale National Public Management Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Morten Balle

    This article explores differences and similarities between two evaluations of large-scale administrative reforms which were carried out in the 2000s: The evaluation of the Norwegian NAV reform (EVANAV) and the evaluation of the Danish Local Government Reform (LGR). We provide a comparative analysis...... of the similarities and differences between the evaluations of the two reforms and discuss their pros and cons....

  6. A Large-Scale Earth and Ocean Phenomenon

    Indian Academy of Sciences (India)

Resonance – Journal of Science Education, Volume 10, Issue 2. Tsunamis - A Large-Scale Earth and Ocean Phenomenon. Satish R Shetye. General Article, Volume 10, Issue 2, February 2005, pp 8-19.

  7. Large-Scale Innovation and Change in UK Higher Education (United States)

    Brown, Stephen


    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  8. Women's Land Rights and Working Conditions in Large-scale ...

    African Journals Online (AJOL)

© Council for the Development of Social Science Research in Africa, 2017 (ISSN: 0850-3907). No. 3, 2016, pp. 49-69. Women's Land Rights and Working Conditions in Large-scale Plantations in Sub-Saharan Africa. Lotsmart Fonjong. Abstract: Women's land rights are fundamental for women's economic empowerment.

  9. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb


Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can present significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website:

  10. Diffuse infrared emission of the galaxy: Large scale properties (United States)

    Perault, M.; Boulanger, F.; Falgarone, E.; Puget, J. L.


    The Infrared Astronomy Satellite (IRAS) survey is used to study large scale properties and the origin of the diffuse emission of the Galaxy. A careful subtraction of the zodiacal light enables longitude profiles of the galactic emission at 12, 25, 60, and 100 microns to be presented.

  11. Regeneration and propagation of reed grass for large-scale ...

    African Journals Online (AJOL)



Jan 26, 2012 ... Accepted 8 August, 2011. An in vitro culture system for the large-scale propagation of Phragmites ...

  12. Factors Influencing Uptake of a Large Scale Curriculum Innovation. (United States)

    Adey, Philip S.

    Educational research has all too often failed to be implemented on a large-scale basis. This paper describes the multiplier effect of a professional development program for teachers and for trainers in the United Kingdom, and how that program was developed, monitored, and evaluated. Cognitive Acceleration through Science Education (CASE) is a…

  13. Chain Analysis for large-scale Communication systems

    NARCIS (Netherlands)

    Grijpink, Jan|info:eu-repo/dai/nl/095130861


    The chain concept is introduced to explain how large-scale information infrastructures so often fail and sometimes even backfire. Next, the assessment framework of the doctrine of Chain-computerisation and its chain analysis procedure are outlined. In this procedure chain description precedes

  14. A variance-minimizing filter for large-scale applications

    NARCIS (Netherlands)

    Leeuwen, P.J. van

    A data-assimilation method is introduced for large-scale applications in the ocean and the atmosphere that does not rely on Gaussian assumptions, i.e. it is completely general following Bayes theorem. It is a so-called particle filter. A truly variance minimizing filter is introduced and its
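The particle-filter idea mentioned here, representing a posterior by weighted samples with no Gaussian assumption, can be illustrated with a one-dimensional bootstrap filter. The random-walk state model, noise levels, and synthetic data are illustrative assumptions, not the paper's variance-minimizing scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=1000, obs_noise=0.5, proc_noise=0.3):
    """Bootstrap particle filter for a 1-D random-walk state observed with
    Gaussian noise. The posterior is carried by weighted particles, so no
    Gaussian assumption on its shape is required (Bayes' theorem throughout)."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate particles through the (assumed) process model
        particles = particles + rng.normal(0.0, proc_noise, n_particles)
        # Weight by the observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_noise) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))  # posterior-mean estimate
        # Multinomial resampling to counter weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

truth = np.cumsum(rng.normal(0.0, 0.3, 40))   # hidden state trajectory
obs = truth + rng.normal(0.0, 0.5, 40)        # noisy observations
est = particle_filter(obs)
print(est.shape)  # (40,)
```

The "truly variance minimizing" construction in the abstract concerns how these weights and proposals are chosen, which this sketch does not attempt to reproduce.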

  15. Large-Scale Assessments and Educational Policies in Italy (United States)

    Damiani, Valeria


    Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy.…

  16. Large-scale disasters: prediction, control, and mitigation

    National Research Council Canada - National Science Library

    Gad-El-Hak, M


    ... an integrated review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling, and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of re...

  17. Small and large scale genomic DNA isolation protocol for chickpea ...

    African Journals Online (AJOL)

    Both small and large scale preparations were essentially suitable for PCR and Southern blot hybridization analyses, which are the key steps in crop improvement programme through marker development and genetic engineering techniques. Key words: Cicer arietinum L., phenolics, restriction enzyme digestion, PCR ...

  18. Large-Scale Machine Learning for Classification and Search (United States)

    Liu, Wei


    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  19. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven


Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
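The mechanism can be shown on a toy problem: units with quadratic costs a_i u_i^2 must jointly meet a demand, each unit minimizes its own cost given a price lambda, and a coordinator adjusts lambda by dual gradient ascent. All numbers and the cost model are made up for illustration; the abstract's actual formulation is not reproduced:

```python
import numpy as np

def dual_decomposition(a, demand, steps=200, lr=0.05):
    """Toy dual decomposition: minimize sum_i a_i * u_i^2 subject to
    sum_i u_i = demand. Each unit solves min_u a_i u^2 - lam * u, giving
    u_i = lam / (2 a_i); a coordinator updates the price lam by gradient
    ascent on the dual (the balance residual)."""
    a = np.asarray(a, dtype=float)
    lam = 0.0
    for _ in range(steps):
        u = lam / (2.0 * a)               # local solutions, parallelizable
        lam += lr * (demand - u.sum())    # price moves toward balance
    return u, lam

u, lam = dual_decomposition([1.0, 2.0, 4.0], demand=7.0)
print(np.round(u, 3), round(float(u.sum()), 3))
```

Cheaper units (smaller a_i) take larger shares; at the optimum the allocation here approaches u = [4, 2, 1] at price lambda = 8.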

  20. Random walk models of large-scale structure

    Indian Academy of Sciences (India)

Abstract. This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is ...
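The central object, the first-crossing distribution, can be estimated directly by simulation. A minimal Monte Carlo sketch follows; the constant barrier and independent Gaussian steps are simplifying assumptions (real excursion-set calculations use scale-dependent barriers and correlated walks):

```python
import numpy as np

rng = np.random.default_rng(2)

def crossed_fraction(n_walks=20000, n_steps=100, barrier=1.0):
    """Monte Carlo estimate of the fraction of Gaussian random walks that
    cross a constant barrier within n_steps; the step-by-step histogram of
    first crossings is the first-crossing distribution."""
    # Steps scaled so the walk variance reaches barrier**2 at the last step
    steps = rng.normal(0.0, barrier / np.sqrt(n_steps), (n_walks, n_steps))
    walks = np.cumsum(steps, axis=1)
    crossed = walks >= barrier
    # First-crossing step index per walk (-1 if the walk never crosses)
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), -1)
    return np.mean(first >= 0)

frac = crossed_fraction()
print(round(frac, 2))
```

For a continuous walk with unit variance at the barrier, the reflection principle gives a crossing probability of 2(1 - Phi(1)), roughly 0.32, which the discrete estimate should approach from below.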

  1. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.


    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  2. The Large Scale Magnetic Field and Sunspot Cycles

    Indian Academy of Sciences (India)


J. Astrophys. Astr. (2000) 21, 161-162. The Large Scale Magnetic Field and Sunspot Cycles. V. I. Makarov & A. G. Tlatov, Kislovodsk Solar Station of the Pulkovo Observatory, Kislovodsk 357700, P.O. Box 145, Russia. Key words: Sun: magnetic field—sunspots—solar cycle. Extended abstract.

  3. The interaction of large scale and mesoscale environment leading to ...

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Earth System Science; Volume 118; Issue 5. The interaction of large scale and mesoscale environment leading to formation of intense thunderstorms over Kolkata. Part I: Doppler radar and satellite observations. P Mukhopadhyay M Mahakur H A K Singh. Volume 118 Issue 5 October 2009 pp ...

  4. Optimization of large scale food production using Lean Manufacturing principles

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Breum, Gitte


    This paper discusses how the production principles of Lean Manufacturing (Lean) can be applied in a large-scale meal production. Lean principles are briefly presented, followed by a field study of how a kitchen at a Danish hospital has implemented Lean in the daily production. In the kitchen...

  5. The Development of Spatial Representations of Large-Scale Environments. (United States)

    Herman, James F.; Siegel, Alexander W.

    This experiment investigated the effect of children's successive encounters with a large scale environment on their subsequent reconstructions of that environment. Twenty children (10 boys, 10 girls) at each of three grade levels (kindergarten, two, and five) reconstructed from memory the spatial layout of buildings in a large model town. All…

  6. Entrepreneurial Employee Activity: A Large Scale International Study

    NARCIS (Netherlands)

    Bosma, N.S.; Stam, E.; Wennekers, S.

    This paper presents the results of the first large scale international comparative study of entrepreneurial employee activity (intrapreneurship). Intrapreneurship is a more wide-spread phenomenon in high income countries than in low income countries. At the organizational level, intrapreneurs have

  7. Large-Scale Assessment and English Language Learners with Disabilities (United States)

    Liu, Kristin K.; Ward, Jenna M.; Thurlow, Martha L.; Christensen, Laurene L.


    This article highlights a set of principles and guidelines, developed by a diverse group of specialists in the field, for appropriately including English language learners (ELLs) with disabilities in large-scale assessments. ELLs with disabilities make up roughly 9% of the rapidly increasing ELL population nationwide. In spite of the small overall…

  8. FMC cameras, high resolution films and very large scale mapping (United States)

    Tachibana, Kikuo; Hasegawa, Hiroyuki


    Very large scale mapping (1/250) was experimented on the basis of FMC camera, high resolution film and total station surveying. The future attractive combination of precision photogrammetry and personal computer assisted terrestrial surveying was investigated from the point of view of accuracy, time effectiveness and total procedures control.

  9. Large-scale land acquisitions in Africa | IDRC - International ...

    International Development Research Centre (IDRC) Digital Library (Canada)


    Aug 12, 2016 ... Early findings from research on large-scale land acquisitions in Africa point to five key cross-cutting issues: Uneven community impacts; Low levels of public awareness and participation; The need for fair compensation; Unclear and insecure tenure rights; Erosion of women's rights. This brief highlights five ...

  10. Large-scale ensemble averaging of ambulatory impedance cardiograms

    NARCIS (Netherlands)

    Riese, H; Groot, PFC; Van Den Berg, M; Kupper, NHM; Magnee, EHB; Rohaan, EJ; Vrijkotte, TGM; Willemsen, G; De Geus, EJC

Impedance cardiography has been used increasingly to measure human physiological responses to emotional and mentally engaging stimuli. The validity of large-scale ensemble averaging of ambulatory impedance cardiograms was evaluated for preejection period (PEP), interbeat interval, and dZ/dt(min)
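The core operation, averaging many beat-aligned signal windows so that uncorrelated noise cancels, can be sketched as follows. The synthetic signal, window length, and onset spacing are illustrative assumptions, not the study's recording parameters:

```python
import numpy as np

def ensemble_average(signal, beat_onsets, window=60):
    """Average fixed-length windows of a signal aligned on beat onsets
    (e.g. R-peaks). With k beats, uncorrelated noise shrinks by sqrt(k)."""
    segments = [signal[i:i + window] for i in beat_onsets
                if i + window <= len(signal)]
    return np.mean(segments, axis=0)

# Synthetic example: a repeating waveform plus noise, one beat per 100 samples
rng = np.random.default_rng(3)
template = np.sin(np.linspace(0, np.pi, 60))
sig = np.zeros(1000)
onsets = list(range(0, 900, 100))
for i in onsets:
    sig[i:i + 60] += template
sig += rng.normal(0.0, 0.5, sig.size)

avg = ensemble_average(sig, onsets)
print(avg.shape)  # (60,)
```

The averaged beat recovers the template far more cleanly than any single beat, which is what makes parameters like PEP and dZ/dt(min) readable in ambulatory recordings.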

  11. Large-scale organizational change: an executive's guide

    National Research Council Canada - National Science Library

    Laszlo, Christopher; Laugel, Jean-François


    ... for managers who sail into the teeth of complexity and paradox." -Perry Pascarella, former editor-in-chief, Industry Week "Large-Scale Organizational Change opens new and promising avenues for leaders of large organizations. It shows how ecological sustainability itself changes the relationship between business and the outside world....

  12. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.


    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  13. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....

  14. Kinematics of large scale asymmetric folds and associated smaller ...

    Indian Academy of Sciences (India)

    The first order structures in this belt are interpreted as large scale buckle folds above a subsurface decollement emphasizing the importance of detachment folding in thin skinned deformation of a sedimentary prism lying above a gneissic basement. That the folds have developed through fixed-hinge buckling is constrained ...

  15. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

point-parity asymmetry. Here we find that a modified curvaton model can reproduce an asymmetric behavior of the power spectrum at low multipoles. The other approach is to uncover whether some of the large scale anomalies could have a common origin in residual contamination from the Galactic radio loops...

  16. Large-Scale Environmental Influences on Aquatic Animal Health (United States)

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  17. A large-scale perspective on stress-induced alterations in resting-state networks (United States)

    Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron


Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed, and how it relates to subjective experience, are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change involving 490 parcel pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheric parieto-temporal connectivity. These alterations were further found to be associated with changes in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change, between the right amygdala and the precuneus, which correlated inversely with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight into stress-induced neural modulations and their relation to subjective experience.

  18. Engineering design for a large scale renewable energy network installation in an urban environment (United States)

    Mansouri Kouhestani, F.; Byrne, J. M.; Hazendonk, P.; Spencer, L.; Brown, M. B.


Humanity's current rate of resource consumption cannot be maintained, and the use of renewable energy is a significant step towards a sustainable energy future. Alberta is the largest greenhouse gas-producing province in Canada (per capita), and climate change is expected to bring Alberta warmer temperatures, more intense floods, and earlier snowmelt. However, as one of the sunniest and windiest places in Canada, Alberta is poised to become one of Canada's leading provinces in the use of renewable energy. This research has four main objectives: first, to determine the feasibility of implementing solar and wind energy systems at the University of Lethbridge campus; second, to quantify rooftop and parking lot solar photovoltaic potential for the city of Lethbridge; third, to determine the available rooftop area for PV deployment in a large-scale region (the Province of Alberta); and fourth, to investigate different strategies for correlating solar PV array production with electricity demand in the province of Alberta. The proposed work addresses the need for Alberta to reduce the fossil fuel pollution that drives climate change and degrades air, water and land resources.

  19. The workshop on iterative methods for large scale nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Walker, H.F. [Utah State Univ., Logan, UT (United States). Dept. of Mathematics and Statistics; Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States). Utah Supercomputing Inst.


The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into the online proceedings, in which the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  20. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed


The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.

  1. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown


    Full Text Available This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  2. Efficient solution and analysis of large-scale goal programmes

    Energy Technology Data Exchange (ETDEWEB)

    Jones, D.; Tamiz, M.


Goal Programming (GP) models are formed by the extension of Linear Programming (LP) to include multiple, possibly conflicting, objectives. The past two decades have seen a wealth of GP applications along with theoretical papers describing methods for the analysis and solution of GPs. With the application area of GP covering and extending that of LP, GP models of the size of recent large-scale LP models have arisen. Such models may contain large numbers of objectives as well as constraints and variables. These models require the adaptation of LP analysis techniques to account for their size, plus further specialized GP speed-ups to lower solution time. This paper investigates efficient methods of Pareto-efficiency analysis and restoration, normalization, redundancy checking, preference function modelling, and interactive method application appropriate for large-scale GPs, as well as suggesting a refined simplex procedure more appropriate for the solution of GPs.
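The deviation-variable construction at the heart of GP can be illustrated on a toy instance; the targets, weights and hard constraint below are invented for the example, and a brute-force grid search stands in for the simplex-based solvers the paper is about:

```python
import itertools
import numpy as np

# Toy weighted goal programme (all numbers invented for illustration):
#   goal 1: x1 + x2 should hit a target of 10
#   goal 2: x1 should hit a target of 7
#   hard constraint: x1 + 2*x2 <= 12, with x1, x2 >= 0.
# Each goal's deviation splits into negative/positive parts n, p with
# n*p = 0 at the optimum, so minimizing w1*(n1+p1) + w2*(n2+p2) equals
# minimizing the weighted absolute deviations from the targets.
def gp_objective(x1, x2, w=(1.0, 1.0)):
    d1 = (x1 + x2) - 10.0
    d2 = x1 - 7.0
    return w[0] * abs(d1) + w[1] * abs(d2)

grid = np.linspace(0, 12, 241)  # 0.05 spacing
best = min(((gp_objective(a, b), a, b)
            for a, b in itertools.product(grid, grid)
            if a + 2 * b <= 12 + 1e-9),   # small tolerance for the float grid
           key=lambda t: t[0])
# Optimum: deviation 0.5 at x1 = 7, x2 = 2.5 -- goal 2 is met exactly,
# while the hard constraint forces x1 + x2 to stop at 9.5.
```

A real large-scale GP would express n and p as explicit LP variables and hand them to a (revised) simplex solver, which is exactly where the speed-ups discussed in the paper apply.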

  3. Evaluation of Large-scale Public Sector Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Hanne Foss


Research on the evaluation of large-scale public sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance. The impact of such reforms is considerable. Furthermore, they change the context in which evaluations of other and more delimited policy areas take place. In our analysis we apply four governance perspectives (rational-instrumental, rational-interest based, institutional-cultural and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public sector reforms in Denmark and Norway. We compare the evaluation process (focus and purpose), the evaluators and the organization of the evaluation, as well as the utilization of the evaluation results. The analysis uncovers several significant findings, including how the initial organization of the evaluation shows strong impact on the utilization...

  4. In the fast lane: large-scale bacterial genome engineering. (United States)

    Fehér, Tamás; Burland, Valerie; Pósfai, György


The last few years have witnessed rapid progress in bacterial genome engineering. The long-established, standard ways of DNA synthesis, modification, transfer into living cells, and incorporation into genomes have given way to more effective, large-scale, robust genome modification protocols. Expansion of these engineering capabilities is due to several factors. Key advances include: (i) progress in oligonucleotide synthesis and in vitro and in vivo assembly methods, (ii) optimization of recombineering techniques, (iii) introduction of parallel, large-scale, combinatorial, and automated genome modification procedures, and (iv) rapid identification of the modifications by barcode-based analysis and sequencing. Combination of the brute force of these techniques with sophisticated bioinformatic design and modeling opens up new avenues not only for the analysis of gene functions and cellular network interactions, but also for engineering more effective producer strains. This review presents a summary of recent technological advances in bacterial genome engineering. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira


    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses......, and focuses on improving runtimes regardless of where the queries are issued from. In this work, we claim that progress can be made by taking a novel, more holistic view of the problem. We discuss a new approach that combines the two strands of research on the user experience and query engine parts in order...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....

  6. Performance of Grey Wolf Optimizer on large scale problems (United States)

    Gupta, Shubham; Deep, Kusum


Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, and they can be applied to real-life problems where conventional techniques fail. Grey Wolf Optimizer is one such technique, which has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on 5 common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large scale problems, except Rosenbrock, which is a unimodal function.
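A minimal sketch of the method on one of the listed benchmarks follows; the population size, iteration count and bounds are illustrative, and a small elitist twist (keeping a wolf's old position when the new one is worse) is added for stability and is not part of the canonical algorithm:

```python
import numpy as np

def rastrigin(x):
    """Scalable multimodal benchmark from the paper's test set."""
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x))

def gwo(f, dim, bounds=(-5.12, 5.12), wolves=30, iters=200, seed=1):
    """Minimal Grey Wolf Optimizer sketch (parameter choices are illustrative)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (wolves, dim))
    fit = np.array([f(x) for x in X])
    for t in range(iters):
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]  # three best wolves lead
        a = 2.0 * (1 - t / iters)                  # a decreases linearly from 2 to 0
        new_X = np.empty_like(X)
        for i in range(wolves):
            cand = []
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(dim) - 1)  # A = 2a*r1 - a
                C = 2 * rng.random(dim)            # C = 2*r2
                D = np.abs(C * leader - X[i])
                cand.append(leader - A * D)
            new_X[i] = np.clip(np.mean(cand, axis=0), lo, hi)
        new_fit = np.array([f(x) for x in new_X])
        better = new_fit < fit                     # elitist twist (see lead-in)
        X[better], fit[better] = new_X[better], new_fit[better]
    return float(fit.min())

best = gwo(rastrigin, dim=50)   # well below the ~935 expected for random points
```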

  7. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization (United States)

    Swanson, Gregory T.; Cassell, Alan M.


Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with the sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts that are under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  8. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.


The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property in the event of accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates, for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills, 21 and 81 m in diameter, were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore of the physics and hazards of large LNG spills and fires.
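The reduced-scale tests relate flame height to pool diameter through the nondimensional heat release rate Q*. As a hedged illustration of that scaling: the Heskestad flame-height correlation below is a generic pool-fire relation from the fire literature, not the fit produced in the Sandia report, and the burn-rate and heat-of-combustion numbers are round-number assumptions:

```python
import math

def q_star(Q_kW, D_m, rho=1.2, cp=1.0, T=293.0, g=9.81):
    """Nondimensional heat release rate: Q* = Q / (rho*cp*T*sqrt(g*D)*D^2).

    Q_kW in kW; ambient density rho [kg/m^3], specific heat cp [kJ/(kg K)],
    ambient temperature T [K].
    """
    return Q_kW / (rho * cp * T * math.sqrt(g * D_m) * D_m ** 2)

def flame_height_heskestad(Q_kW, D_m):
    """Mean flame height L = 0.235*Q^(2/5) - 1.02*D (Q in kW; L, D in m)."""
    return 0.235 * Q_kW ** 0.4 - 1.02 * D_m

# Illustrative numbers for a 21 m LNG pool fire: assumed burn rate of
# 0.1 kg/(m^2 s) and heat of combustion of 50 MJ/kg.
D = 21.0
area = math.pi * (D / 2) ** 2
Q = 0.1 * 50_000 * area              # total heat release, kW
qs = q_star(Q, D)                    # order-one nondimensional heat release
L = flame_height_heskestad(Q, D)     # on the order of 50 m for these assumptions
```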

  9. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng


    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.
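The signal model can be sketched directly. Frequencies are placed on the DFT grid here purely so the sparsity is easy to check; the paper's continuous-domain setting makes no such assumption, and nothing below reproduces its semidefinite program:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 64, 5

# Pick s distinct 2-D frequencies (on the DFT grid, for easy verification).
flat = rng.choice(N * N, size=s, replace=False)
f1, f2 = np.unravel_index(flat, (N, N))

# Superposition of s complex sinusoids on an N x N time grid.
t1, t2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
signal = sum(np.exp(2j * np.pi * (a * t1 + b * t2) / N) for a, b in zip(f1, f2))

# Partial observation: keep roughly 30% of the time samples.
mask = rng.random((N, N)) < 0.3
observed = np.where(mask, signal, 0)

# Sanity check of the sparsity: the full signal's normalized DFT has
# exactly s unit-magnitude bins.
spectrum = np.fft.fft2(signal) / (N * N)
```

The recovery problem the paper addresses is to reconstruct `signal` (and the s frequencies, which in general lie off the grid) from `observed` alone.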

  10. The Cambridge CFD Grid for Large Scale Distributed CFD Applications


    Yang, Xiaobo; Hayes, Mark; Jenkins, K.; Cant, Stewart R.


A revised version submitted for publication in the Elsevier journal Future Generation Computer Systems (promotional issue on Grid Computing); originally appeared in the Proceedings of ICCS 2004, Krakow, Poland, June 2004. The Cambridge CFD (computational fluid dynamics) Grid is a distributed problem solving environment for large-scale CFD applications set up between the Cambridge eScience Centre and the CFD Lab in the Engineering Department at the University of Cambridge. A Web portal, th...

  11. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y


    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  12. A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services


    Putnam, Andrew; Caulfield, Adrian M.; Chung, Eric S.; Chiou, Derek; Constantinides, Kypros; Demme, John; Esmaeilzadeh, Hadi; Fowers, Jeremy; Gopal, Gopi Prashanth; Gray, Jan; Haselman, Michael; Hauck, Scott; Heil, Stephen; Hormati, Amir; Kim, Joo-Young


    Datacenter workloads demand high computational capabilities, flexibility, power efficiency, and low cost. It is challenging to improve all of these factors simultaneously. To advance datacenter capabilities beyond what commodity server designs can provide, we have designed and built a composable, reconfigurable fabric to accelerate portions of large-scale software services. Each instantiation of the fabric consists of a 6×8 2-D torus of high-end Stratix V FPGAs embedded into a half-rack of 48...

  13. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety


This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  14. On the Phenomenology of an Accelerated Large-Scale Universe


    Martiros Khurshudyan


In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. On the other hand, inflation is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and physics in general. There are several well defined approaches to solve this problem. One of them is an assumption concerning the existenc...

  15. A Large-Scale Study of Online Shopping Behavior


    Nalchigar, Soroosh; Weber, Ingmar


    The continuous growth of electronic commerce has stimulated great interest in studying online consumer behavior. Given the significant growth in online shopping, better understanding of customers allows better marketing strategies to be designed. While studies of online shopping attitude are widespread in the literature, studies of browsing habits differences in relation to online shopping are scarce. This research performs a large scale study of the relationship between Internet browsing hab...

  16. Large Scale Evolution of Convolutional Neural Networks Using Volunteer Computing


    Desell, Travis


    This work presents a new algorithm called evolutionary exploration of augmenting convolutional topologies (EXACT), which is capable of evolving the structure of convolutional neural networks (CNNs). EXACT is in part modeled after the neuroevolution of augmenting topologies (NEAT) algorithm, with notable exceptions to allow it to scale to large scale distributed computing environments and evolve networks with convolutional filters. In addition to multithreaded and MPI versions, EXACT has been ...

  17. Robust regression for large-scale neuroimaging studies.


    Bokde, Arun


Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypot...

  18. Computational tools for large-scale biological network analysis


    Pinto, José Pedro Basto Gouveia Pereira


Doctoral thesis in Informatics. The surge of the field of Bioinformatics, among other contributions, provided biological researchers with powerful computational methods for processing and analysing the large amounts of data coming from recent biological experimental techniques such as genome sequencing and other omics. Naturally, this led to the opening of new avenues of biological research, among which is included the analysis of large-scale biological networks. The an...

  19. Large-Scale Integrated Carbon Nanotube Gas Sensors


    Joondong Kim


    Carbon nanotube (CNT) is a promising one-dimensional nanostructure for various nanoscale electronics. Additionally, nanostructures would provide a significant large surface area at a fixed volume, which is an advantage for high-responsive gas sensors. However, the difficulty in fabrication processes limits the CNT gas sensors for the large-scale production. We review the viable scheme for large-area application including the CNT gas sensor fabrication and reaction mechanism with a practical d...

  20. Computing Sampling Weights in Large-scale Assessments in Education


    Meinck, Sabine


    Sampling weights are a reflection of sampling design; they allow us to draw valid conclusions about population features from sample data. This paper explains the fundamentals of computing sampling weights for large-scale assessments in educational research. The relationship between the nature of complex samples and best practices in developing a set of weights to enable computation of unbiased population estimates is described. Effects of sampling weights on estimates are shown...
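The basic mechanics can be shown for a hypothetical two-stage design (all numbers invented): the base weight of a sampled unit is the inverse of its overall inclusion probability.

```python
# Hypothetical two-stage design: schools sampled with probability p_school,
# then students within a sampled school with probability p_student.
# The base weight is the inverse of the overall inclusion probability.
def base_weight(p_school, p_student):
    return 1.0 / (p_school * p_student)

# e.g. 150 of 3000 schools, then 20 of 500 students per sampled school:
w = base_weight(150 / 3000, 20 / 500)
# Each sampled student "represents" 500 students, and the weights are
# consistent with the population size: 150*20 sampled students times
# w = 500 gives 1,500,000 = 3000 schools * 500 students each.
```

Operational assessment weights add nonresponse and calibration adjustments on top of this base weight, which is part of what the paper's best-practice discussion covers.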

  1. Large-Scale Integrated Carbon Nanotube Gas Sensors

    Directory of Open Access Journals (Sweden)

    Joondong Kim


Full Text Available Carbon nanotube (CNT) is a promising one-dimensional nanostructure for various nanoscale electronics. Additionally, nanostructures would provide a significant large surface area at a fixed volume, which is an advantage for high-responsive gas sensors. However, the difficulty in fabrication processes limits the CNT gas sensors for the large-scale production. We review the viable scheme for large-area application including the CNT gas sensor fabrication and reaction mechanism with a practical demonstration.

  2. PKI security in large-scale healthcare networks


    Mantas, G.; Lymberopoulos, D.; Komninos, N.


    During the past few years a lot of PKI (Public Key Infrastructures) infrastructures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKI infrastructures. Especially, there are a lot of challenges for PKI infrastructures deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a ...

  3. Battlespace Awareness: Heterogeneous Sensor Maps of Large Scale, Complex Environments (United States)


[Abstract fragment recovered from report documentation page:] ... maps of a large-scale, complex environment. Until recently, these maps were restricted to sparse, 2D representations due to computational, memory, and sensor limitations. With the...

  4. A manager's view on large scale XP projects


    Rumpe, Bernhard; Scholz, Peter


XP is a code-oriented, lightweight software engineering methodology, suited mainly for small teams that develop software relying on vague or rapidly changing requirements. Being very code oriented, XP is known to the discipline of systems engineering as an approach of incremental system change. In this contribution, we discuss an enhanced version of a concept for extending XP to large scale projects with hundreds of software engineers and programmers. A previous version was...

  5. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism, (United States)



  6. An Evaluation Framework for Large-Scale Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun


    An evaluation framework for large-scale network structures is presented, which facilitates evaluations and comparisons of different physical network structures. A number of quantitative and qualitative parameters are presented, and their importance to networks discussed. Choosing a network...... is closed by an example of how the framework can be used. The framework supports network planners in decision-making and researchers in evaluation and development of network structures....

  7. Large Scale Solar Heating: Evaluation, Modelling and Designing


    Heller, Alfred; Svendsen, Svend; Furbo, Simon


    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out based on measurements on the Marstal plant, Denmark, and through comparison with published and unpublished data from other plants. Evaluations on the thermal, economical and environmental performance...

  8. Accelerating large-scale phase-field simulations with GPU (United States)

    Shi, Xiaoming; Huang, Houbing; Cao, Guoping; Ma, Xingqiao


A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were solved with the GPU implementation to test the performance of the package. Comparing the results computed on a single CPU with those computed on the GPU, the GPU version was found to be about 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
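The semi-implicit Fourier update that this kind of package is built around can be sketched on the CPU with NumPy; the grid size, mobility and gradient coefficient below are illustrative, and the GPU package evaluates the same style of update with CUDA FFTs:

```python
import numpy as np

# Hedged CPU sketch of one semi-implicit Fourier step for the Allen-Cahn
# equation  d(phi)/dt = -M * (phi^3 - phi - kappa * Laplacian(phi)).
# The nonlinear bulk term is treated explicitly and the stiff gradient
# term implicitly, which is what permits large stable time steps.
def allen_cahn_step(phi, dt=0.1, M=1.0, kappa=1.0):
    N = phi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N)            # unit grid spacing assumed
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    f_prime = phi ** 3 - phi                     # bulk driving force, explicit
    phi_hat = (np.fft.fft2(phi) - dt * M * np.fft.fft2(f_prime)) \
              / (1 + dt * M * kappa * k2)        # interface term, implicit
    return np.real(np.fft.ifft2(phi_hat))

rng = np.random.default_rng(0)
phi = 0.1 * rng.standard_normal((64, 64))        # small random initial condition
for _ in range(100):
    phi = allen_cahn_step(phi)                   # noise coarsens into +/-1 domains
```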

  9. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)


    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  10. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.


    While real-time applications are nowadays routinely used to visualize large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers. However, not all users have access to powerful clusters, so it is challenging to devise a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one method that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI to visualizing large flow-field pathline data. The goal of our work is to provide an optimized image-based method that scales well with dataset size. Our approach is based on constructing a per-pixel linked-list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code, and explore large-scale flow data in real time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
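    The per-pixel linked-list idea can be sketched on the CPU: each screen pixel keeps the list of pathline segments projected onto it, so filtering and recoloring never touch the original large dataset. The segment fields and projection below are simplified assumptions, not the thesis' GPU data layout:

```python
from collections import defaultdict

def build_pixel_lists(segments, width, height):
    """segments: iterable of (x, y, depth, attribute), already projected to screen.
    Returns a per-pixel list of (depth, attribute) fragments."""
    pixel_lists = defaultdict(list)
    for x, y, depth, attr in segments:
        if 0 <= x < width and 0 <= y < height:
            pixel_lists[(x, y)].append((depth, attr))
    for frags in pixel_lists.values():
        frags.sort()                 # front-to-back order enables early termination
    return pixel_lists

def shade(pixel_lists, predicate):
    """Re-render by filtering fragments per pixel, without re-reading raw data."""
    return {p: [f for f in frags if predicate(f[1])]
            for p, frags in pixel_lists.items()}

plists = build_pixel_lists(
    [(1, 1, 0.5, "fast"), (1, 1, 0.2, "slow"), (2, 3, 0.9, "fast")], 4, 4)
fast_only = shade(plists, lambda a: a == "fast")
print(len(fast_only[(1, 1)]))   # 1
```

    On the GPU the same structure is built with atomic head-pointer updates per pixel, but the access pattern during exploration is identical.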

  11. Robust regression for large-scale neuroimaging studies. (United States)

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand


    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals), due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms, using an imaging genetics study with 392 subjects as an example. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally, we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
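    The basic appeal of robust regression, downweighting gross outliers that would bias ordinary least squares, can be illustrated with a generic Huber M-estimator fitted by iteratively reweighted least squares. This is a sketch of the general technique, not the specific estimator or test used in the paper:

```python
import numpy as np

def huber_irls(X, y, delta=1.345, iters=50):
    """Huber robust regression via iteratively reweighted least squares (IRLS)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / (delta * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)             # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 200)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(200)
y[:20] += 15.0                                           # 10% gross outliers
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_irls(X, y)
print(beta_ols.round(2), beta_rob.round(2))              # OLS is biased; Huber is not
```

    On a true model of intercept 1 and slope 2, the outliers pull the OLS intercept far off while the robust fit stays close, which is exactly the failure mode that matters when artifacts contaminate large cohorts.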

  12. Large-Scale Atmospheric Step-and-Repeat UV Nanoimprinting

    Directory of Open Access Journals (Sweden)

    Kentaro Ishibashi


    Step-and-repeat UV nanoimprinting for large-scale nanostructure fabrication under atmospheric pressure was realized using a high-viscosity photocurable resin and a simple nanoimprinting system. In step-and-repeat UV nanoimprinting under atmospheric pressure using low-viscosity resin, large-scale nanostructure fabrication is very difficult due to bubble defects and nonuniformity of the residual layer. To minimize both problems, we focused on the damping effects of photocurable resin viscosity. Fabrication of 165 dies was successfully demonstrated in a 130×130 mm2 area on an 8-inch silicon substrate by step-and-repeat UV nanoimprinting under atmospheric pressure using the high-viscosity photocurable resin. Nanostructures with widths and spacings from 80 nm to 3 μm and a depth of 200 nm were formed using a quartz mold. Bubble defects were not observed, and residual layer uniformity was within 30 nm ±10%. This study reports on simple step-and-repeat UV nanoimprinting under atmospheric pressure using high-viscosity photocurable resin as a widely applicable method for large-scale mass production of nanostructures.

  13. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng


    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks at large scale, scalability and computational efficiency are desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy for solving general semidefinite programs on large-scale datasets. Compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration: the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at that iteration. By optimizing the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, and thus tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms with the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low-rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
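    The bilateral update can be illustrated on a toy problem, projecting a symmetric matrix onto the positive semidefinite cone. Each iteration mixes the current iterate with a rank-1 matrix built from the leading eigenvector of the descent direction and optimizes the two mixing coefficients. This is a sketch of the idea only; the paper's algorithm handles general convex objectives and constraints:

```python
import numpy as np

def bilgo_psd_projection(C, iters=500):
    """Toy BILGO-style bilateral greedy solver for
        min ||X - C||_F^2   s.t.  X positive semidefinite."""
    n = C.shape[0]
    X = np.zeros((n, n))
    for _ in range(iters):
        G = 2.0 * (X - C)                       # gradient of the objective
        _, V = np.linalg.eigh(-G)
        v = V[:, -1]                            # leading eigvec of descent direction
        B = np.outer(v, v)                      # unit-norm PSD rank-1 candidate
        # Exact 2x2 solve for the bilateral coefficients alpha, beta (clipped >= 0)
        M = np.array([[np.sum(X * X), np.sum(X * B)],
                      [np.sum(X * B), 1.0]])
        b = np.array([np.sum(C * X), np.sum(C * B)])
        alpha, beta = np.maximum(np.linalg.solve(M + 1e-9 * np.eye(2), b), 0.0)
        X_new = alpha * X + beta * B
        if np.linalg.norm(X_new - C) <= np.linalg.norm(X - C):
            X = X_new                           # accept the bilateral step
        else:                                   # safeguard: plain optimal rank-1 step
            X = X + max(0.0, np.sum((C - X) * B)) * B
    return X

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
C = (A + A.T) / 2.0                             # symmetric but indefinite
X = bilgo_psd_projection(C)
w, V = np.linalg.eigh(C)
X_true = (V * np.maximum(w, 0.0)) @ V.T         # exact PSD projection of C
print(np.linalg.norm(X - X_true))               # distance to the true optimum
```

    Since alpha and beta are clipped to be nonnegative, every iterate is a nonnegative combination of PSD matrices and therefore remains feasible without any projection step.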

  14. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi


    A new package for accelerating large-scale phase-field simulations was developed for GPUs based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using algorithms written in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were each solved on the GPU to test the performance of the package. A comparison of results between the single-CPU solver and the GPU solver showed that the GPU version runs up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for designing large-scale functional devices.

  15. Utilization of Large Scale Surface Models for Detailed Visibility Analyses (United States)

    Caha, J.; Kačmařík, M.


    This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds, which aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the Opava River, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
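    Boolean visibility on a raster surface model reduces to a line-of-sight test between two cells: sample the sight line and check whether the terrain ever rises above it. A minimal sketch (real GIS tools interpolate more carefully and repeat this test for every cell to build a full viewshed):

```python
import numpy as np

def line_of_sight(dem, observer, target, obs_height=1.7):
    """Boolean visibility between two cells of a DEM by sampling the sight line."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0, c0] + obs_height                  # eye level above the surface
    z1 = dem[r1, c1]
    n = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(1, n):                          # interior samples along the ray
        t = i / n
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        ray_z = z0 + t * (z1 - z0)                 # sight-line elevation at t
        if dem[r, c] > ray_z:                      # terrain blocks the line
            return False
    return True

dem = np.zeros((10, 10))
dem[5, 5] = 50.0                                   # a hill between observer and target
print(line_of_sight(dem, (0, 0), (9, 9)))          # False: the hill blocks the view
print(line_of_sight(dem, (0, 0), (9, 0)))          # True: flat ground along this ray
```

    Extended viewsheds generalize this by recording, per cell, quantities such as the angle by which the sight line clears (or fails to clear) the local horizon rather than a single Boolean.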

  16. Dualities and dilemmas: Contending with uncertainty in large-scale safety-critical projects


    Saunders, Fiona; Sherry, Andrew; Gale, Andrew


    Uncertainty is a fact of project life. Most decisions that are made on a safety-critical project involve uncertainty, the consequences of which may be highly significant to the safe and timely delivery of the project. Based on interviews with project management practitioners on nine large-scale civil nuclear and aerospace projects, we explore how uncertainty emerges, and how project management practitioners identify, analyse and act on it. We make three important contributions. First, we pres...

  17. Big Graphics and Little Screens: Model-Based Design of Large Scale Information Displays (United States)


    The design of large-scale information displays is addressed. Problems with traditional approaches to display design are discussed. It is argued that the evolving nature of humans’ roles in complex systems will exacerbate these problems. A model-based framework for display design is proposed, involving system models, task models, and humans’ models of systems and tasks. This framework provides a basis for exploring three types of display design problems

  18. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur


    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments in parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly conducting objects; and discusses applications including scattering from airborne targets, scattering from red

  19. Methamphetamine injecting is associated with phylogenetic clustering of hepatitis C virus infection among street-involved youth in Vancouver, Canada* (United States)

    Cunningham, Evan; Jacka, Brendan; DeBeck, Kora; Applegate, Tanya A; Harrigan, P. Richard; Krajden, Mel; Marshall, Brandon DL; Montaner, Julio; Lima, Viviane Dias; Olmstead, Andrea; Milloy, M-J; Wood, Evan; Grebely, Jason


    Background Among prospective cohorts of people who inject drugs (PWID), phylogenetic clustering of HCV infection has been observed. However, the majority of studies have included older PWID, representing distant transmission events. The aim of this study was to investigate phylogenetic clustering of HCV infection among a cohort of street-involved youth. Methods Data were derived from a prospective cohort of street-involved youth aged 14–26 recruited between 2005 and 2012 in Vancouver, Canada (At Risk Youth Study, ARYS). HCV RNA testing and sequencing (Core-E2) were performed on HCV positive participants. Phylogenetic trees were inferred using maximum likelihood methods and clusters were identified using ClusterPicker (Core-E2 without HVR1, 90% bootstrap threshold, 0.05 genetic distance threshold). Results Among 945 individuals enrolled in ARYS, 16% (n=149, 100% recent injectors) were HCV antibody positive at baseline interview (n=86) or seroconverted during follow-up (n=63). Among HCV antibody positive participants with available samples (n=131), 75% (n=98) had detectable HCV RNA and 66% (n=65, mean age 23, 58% with recent methamphetamine injection, 31% female, 3% HIV+) had available Core-E2 sequences. Of those with Core-E2 sequence, 14% (n=9) were in a cluster (one cluster of three) or pair (two pairs), with all reporting recent methamphetamine injection. Recent methamphetamine injection was associated with membership in a cluster or pair (P=0.009). Conclusion In this study of street-involved youth with HCV infection and recent injecting, 14% demonstrated phylogenetic clustering. Phylogenetic clustering was associated with recent methamphetamine injection, suggesting that methamphetamine drug injection may play an important role in networks of HCV transmission. PMID:25977204
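    The clustering step described above, grouping sequences whose genetic distance falls below a threshold, can be illustrated with a toy pairwise-distance grouping. The sequences below are fabricated examples, and a real ClusterPicker analysis additionally requires the bootstrap support criterion on the phylogeny:

```python
def genetic_distance(a, b):
    """Proportion of differing sites (p-distance) between aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster_by_distance(seqs, threshold=0.05):
    """Single-linkage grouping of sequences under a genetic-distance threshold,
    via union-find. A simplified stand-in for ClusterPicker."""
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path compression
            i = parent[i]
        return i
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if genetic_distance(seqs[i], seqs[j]) < threshold:
                parent[find(i)] = find(j)       # merge the two groups
    groups = {}
    for i in range(len(seqs)):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) > 1]   # clusters/pairs only

seqs = ["ACGT" * 10,                 # reference sequence (40 sites)
        "ACGT" * 9 + "ACGA",         # 1 difference (2.5%): inside the threshold
        "TGCA" * 10]                 # entirely different: unclustered
print(cluster_by_distance(seqs))     # [[0, 1]]
```

    In the study itself, membership in such a cluster or pair is the outcome that was then tested for association with recent methamphetamine injection.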

  20. Asian-Canadian children and families involved in the child welfare system in Canada: A mixed methods study. (United States)

    Lee, Barbara; Fuller-Thomson, Esme; Fallon, Barbara; Trocmé, Nico; Black, Tara


    The purpose of the study is to understand the similarities and differences in child welfare involvement for Asian-Canadian (East and Southeast Asian) versus White-Canadian children and families involved in the child welfare system in Canada, and to consider the implications and recommendations for service. This mixed methods study began by replicating this author's previous study that found significant differences in the case characteristics and services used by Asian compared to non-Asian families in the child welfare system. The present study used a mixed method approach to further build a comprehensive descriptive understanding of Asian-Canadian children and families involved in the child welfare system at national and local levels. Secondary data analysis of the 2008 Canadian Incidence Study of Reported Child Abuse and Neglect (CIS-2008) was conducted to identify the case characteristics (such as referral source, investigation type, and primary maltreatment type) and short-term service outcome (such as substantiation decision and decision to transfer to ongoing child protection services) of child maltreatment investigations involving Asian-Canadian children and families in the child welfare system. The results were presented to focus group participants in a workshop, and a semi-structured interview guide was used to document child welfare workers' experience with and perception of Asian-Canadian service users. The results indicated substantial differences between Asian-Canadian and White-Canadian children and families investigated by child welfare agencies in respect to the household composition, maltreatment type, substantiation decision and decision to transfer to ongoing child protection services. Child welfare workers validated the results from secondary data analysis of the CIS-2008 and offer a broader cultural and structural context for understanding child welfare involvement with Asian-Canadians.
Asian-Canadian children and families bring a diversity

  1. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)


    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
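    Broyden's method, central to the report's comparison, replaces the Jacobian with an approximation that receives a rank-1 secant correction each step, so systems whose Jacobians are unavailable or inaccurate can still be solved. A sketch of the classical (not limited-memory) method, with a crude backtracking damping added for robustness and an illustrative test problem:

```python
import numpy as np

def broyden_solve(F, x0, iters=100, tol=1e-10):
    """Broyden's 'good' method: solve F(x) = 0 without evaluating a Jacobian."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                        # initial Jacobian approximation
    Fx = F(x)
    for _ in range(iters):
        s = np.linalg.solve(B, -Fx)           # quasi-Newton step
        t = 1.0
        x_new, F_new = x + s, F(x + s)
        while np.linalg.norm(F_new) > np.linalg.norm(Fx) and t > 1e-6:
            t *= 0.5                          # damp until the residual shrinks
            x_new = x + t * s
            F_new = F(x_new)
        s = x_new - x
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s) # secant (rank-1) update
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Example: intersection of the circle x^2 + y^2 = 4 with the line y = x.
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[1] - v[0]])
root = broyden_solve(F, [1.0, 2.0])
print(root)                                   # a point with x = y on the circle
```

    The limited-memory variant in the report stores the rank-1 corrections instead of the dense matrix B, which is what makes the approach feasible at large scale.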

  2. Large Scale Observatories for Changing Cold Regions - Recent Progress and Future Vision (United States)

    Wheater, H. S.; Pomeroy, J. W.; Carey, S. K.; DeBeer, C. M.


    Observatories are at the core of hydrological science and a critical resource for the detection and analysis of environmental change. The combination of multiple pressures on the water environment and new scientific opportunities provides a context where a broader vision is urgently needed. Human activities are increasingly affecting land and water management at multiple scales, so our observatories now need to more fully include the human dimensions of water, including their integration across jurisdictional boundaries and at large basin scales. And large scales are also needed to diagnose and predict impacts of climate change at regional and continental scales, and to address land-water-atmosphere interactions and feedbacks. We argue the need to build on the notable past successes of the World Climate Research Programme and move forward to a new era of globally-distributed large scale observatories. This paper introduces 2 such observatories in rapidly warming western Canada - the 405,000 km2 Saskatchewan and the 1.8 million km2 Mackenzie river basins. We review progress in these multi-scale observatories, including the use of point and small basin-scale observatory sites to observe and diagnose complex regional patterns of hydrological change. And building on new opportunities for observational systems and data assimilation, we present a vision for a pan-Canadian observing system to support the science needed for the management of future societal risk from extreme events and environmental change.

  3. Wavelet analysis of precipitation extremes over Canadian ecoregions and teleconnections to large-scale climate anomalies (United States)

    Tan, Xuezhi; Gan, Thian Yew; Shao, Dongguo


    To detect significant interannual and interdecadal oscillations and their teleconnections to large-scale climate anomalies such as the El Niño-Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), and the North Atlantic Oscillation (NAO), monthly and seasonal maximum daily precipitation (MMDP and SMDP) from 131 stations across Canada were analyzed using variants of wavelet analysis. Interannual (1-8 years) oscillations were found to be more significant than interdecadal (8-30 years) oscillations for all selected stations, and the oscillations are both spatially and temporally dependent. Similarly, the significant wavelet coherence and the phase difference between leading principal components of monthly precipitation extremes and climate indices were highly variable in time and in periodicity, and a single climate index explains less than 40% of the total variability. Partial wavelet coherence analysis shows that both ENSO and PDO modulated the interannual variability, and PDO the interdecadal variability, of MMDP over Canada. NAO is correlated with the western MMDP at the interdecadal scale and the eastern MMDP at the interannual scale. The composite analysis shows that precipitation extremes at about three fourths of the stations have been significantly influenced by ENSO and PDO patterns, and at about one half of the stations by NAO patterns. The magnitude of SMDP in extreme El Niño years and in extreme positive-phase PDO events was mostly lower over the Canadian Prairies in summer and winter, and higher in spring and autumn, than in extreme La Niña years. Overall, the degree of influence of large-scale climate patterns on Canadian precipitation extremes varies by season and by region.
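    The continuous wavelet transform underlying such coherence analyses localizes oscillatory power in both time and period. A minimal Morlet-wavelet power spectrum, computed directly in the frequency domain (a generic sketch in the style of the standard Torrence-and-Compo formulation, not the paper's code):

```python
import numpy as np

def morlet_cwt_power(x, scales, dt=1.0, w0=6.0):
    """Wavelet power of a 1-D signal using a Morlet mother wavelet."""
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, dt)        # angular frequencies
    x_hat = np.fft.fft(x - x.mean())
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Fourier transform of the Morlet wavelet at scale s (analytic: omega > 0)
        psi_hat = (np.pi ** -0.25) * np.sqrt(2.0 * np.pi * s / dt) \
                  * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        W = np.fft.ifft(x_hat * np.conj(psi_hat))      # CWT via convolution theorem
        power[i] = np.abs(W) ** 2
    return power

t = np.arange(512)
x = np.sin(2.0 * np.pi * t / 32)                       # oscillation with period 32
scales = np.arange(4, 64)
power = morlet_cwt_power(x, scales)
peak_scale = scales[np.argmax(power.mean(axis=1))]
print(peak_scale)   # near 31: for w0 = 6, Fourier period is about 1.03 * scale
```

    Wavelet coherence extends this by normalizing the cross-wavelet spectrum of two series (e.g. a precipitation index and a climate index), and partial coherence further removes the influence of a third index.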

  4. Large-Scale Traveling Weather Systems in Mars’ Southern Extratropics (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.


    Between late fall and early spring, Mars’ middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring is present in the western hemisphere, arising via orographic influences from the Tharsis highlands and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics that illuminate dynamical differences amongst the simulations are constructed and presented.



    Energy Technology Data Exchange (ETDEWEB)

    Hamid Sarv


    The technical and economic feasibility of large-scale CO{sub 2} transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO{sub 2}. In one case, CO{sub 2} was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe and attaching it to the offshore structure was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provide the best method for delivering liquid CO{sub 2} to deep ocean floor depressions. For shorter distances, CO{sub 2} delivery by parallel-laid subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO{sub 2}, respectively. At these prices, the economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO{sub 2} transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO{sub 2} effluent plumes have been identified as areas that require further attention. Our planned activities in the next Phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO{sub 2} discharge and dispersion, and a conceptual economic and engineering evaluation of large-scale implementation.

  7. Predicting protein functions from redundancies in large-scale protein interaction networks (United States)

    Samanta, Manoj Pratim; Liang, Shoudan


    Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
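    The "more common partners than expected at random" intuition has a natural null model: if protein 2's partners were drawn at random from the network, the number landing among protein 1's partners follows a hypergeometric distribution. A sketch of such a significance test (this mirrors the intuition of the paper's network statistic, not its exact formula):

```python
from math import comb

def common_partner_pvalue(n_total, deg1, deg2, shared):
    """P(at least `shared` common partners by chance): deg2 partners drawn
    uniformly from n_total proteins, counting hits among protein 1's deg1
    partners (hypergeometric upper tail, computed exactly)."""
    denom = comb(n_total, deg2)
    return sum(comb(deg1, k) * comb(n_total - deg1, deg2 - k)
               for k in range(shared, min(deg1, deg2) + 1)) / denom

# Two proteins with 20 partners each in a 1000-protein network:
print(common_partner_pvalue(1000, 20, 20, 0))   # 1.0: zero shared is trivially likely
p5 = common_partner_pvalue(1000, 20, 20, 5)     # expected overlap is only 0.4
print(p5 < 1e-3)                                # sharing 5 partners is very unlikely
```

    Small p-values flag protein pairs whose shared neighborhoods are unlikely under chance, which is what makes the statistic robust to scattered random false-positive edges.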

  8. Do rabbits eat voles? Apparent competition, habitat heterogeneity and large-scale coexistence under mink predation. (United States)

    Oliver, Matthew; Luque-Larena, Juan José; Lambin, Xavier


    Habitat heterogeneity is predicted to profoundly influence the dynamics of indirect interspecific interactions; however, despite potentially significant consequences for multi-species persistence, this remains almost completely unexplored in large-scale natural landscapes. Moreover, how spatial habitat heterogeneity affects the persistence of interacting invasive and native species is also poorly understood. Here we show how the persistence of a native prey (water vole, Arvicola terrestris) is determined by the spatial distribution of an invasive prey (European rabbit, Oryctolagus cuniculus) and directly infer how this is defined by the mobility of a shared invasive predator (American mink, Neovison vison). This study uniquely demonstrates that variation in habitat connectivity in large-scale natural landscapes creates spatial asynchrony, enabling coexistence between apparently competing native and invasive species. These findings highlight that unexpected interactions may be involved in species declines, and also that in such cases habitat heterogeneity should be considered in wildlife management decisions.

  9. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore


    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
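    The basic trip-grouping operation, collecting "closeby" requests into a shared cab, can be sketched greedily. This toy version matches only pickup points; the paper's scalable algorithms additionally handle destinations and time windows and use spatial indexing rather than repeated sorting:

```python
import math

def greedy_trip_grouping(requests, capacity=4, radius=1.0):
    """Greedily group cab requests whose pickup points lie within `radius`
    of a seed request, up to cab capacity."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    remaining = list(requests)
    groups = []
    while remaining:
        seed = remaining.pop(0)                # next ungrouped request seeds a cab
        group = [seed]
        remaining.sort(key=lambda r: dist(seed, r))
        while remaining and len(group) < capacity and dist(seed, remaining[0]) <= radius:
            group.append(remaining.pop(0))     # fill with the nearest requests
        groups.append(group)
    return groups

pickups = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.1, 4.9), (9, 9)]
groups = greedy_trip_grouping(pickups)
print([len(g) for g in groups])   # [3, 2, 1]
```

    The repeated global sort is the part that does not scale; replacing it with a grid or R-tree lookup of nearby requests is the kind of change the paper's algorithms make to handle city-scale volumes.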

  10. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva


    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large-scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.
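    One classical way to scale up kernel methods is Nystrom subsampling: approximate the full n x n kernel matrix using only m landmark points, reducing the cost from roughly O(n^3) to O(n m^2), with the subsample size acting as an implicit regularizer. A generic sketch of this scaling idea (not the talk's specific algorithm), for Gaussian-kernel ridge regression:

```python
import numpy as np

def nystrom_krr(X, y, m=100, gamma=1.0, lam=1e-3, seed=0):
    """Kernel ridge regression with m Nystrom landmarks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)      # landmark subsample
    Z = X[idx]
    def k(A, B):                                         # Gaussian kernel matrix
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Knm = k(X, Z)                                        # n x m
    Kmm = k(Z, Z)                                        # m x m
    # Nystrom normal equations: (Knm^T Knm + lam * Kmm) a = Knm^T y
    a = np.linalg.solve(Knm.T @ Knm + lam * Kmm + 1e-10 * np.eye(m), Knm.T @ y)
    return lambda Xt: k(Xt, Z) @ a

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
predict = nystrom_krr(X, y)
Xt = np.linspace(-3, 3, 200)[:, None]
err = np.sqrt(np.mean((predict(Xt) - np.sin(Xt[:, 0])) ** 2))
print(round(err, 3))   # small RMSE without ever forming the 2000x2000 kernel
```

    The theoretical point of such analyses is that, with m chosen appropriately, the subsampled estimator loses nothing statistically: the approximation error is dominated by the regularization error already present.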

  11. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin


    Recent progress in high-throughput instrumentation has led to astonishing growth in both the volume and complexity of biomedical data collected from various sources. Planet-scale data bring serious challenges to storage and computing technologies. Cloud computing is an attractive alternative because it jointly addresses scalable storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications can help biomedical research make the vast amount of diverse data meaningful and usable.

  12. Facilitating dynamo action via control of large-scale turbulence. (United States)

    Limone, A; Hatch, D R; Forest, C B; Jenko, F


    The magnetohydrodynamic dynamo effect is considered to be the major cause of magnetic field generation in geo- and astrophysical systems. Recent experimental and numerical results show that turbulence constitutes an obstacle to dynamos; yet its role in this context is not totally clear. Via numerical simulations, we identify large-scale turbulent vortices with a detrimental effect on the amplification of the magnetic field in a geometry of experimental interest and propose a strategy for facilitating the dynamo instability by manipulating these detrimental "hidden" dynamics.

  13. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif


    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  14. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus


    locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated...... a microscope and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation...
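The recipe in the abstract — estimate a background distribution, then treat segmentation as many simultaneous hypothesis tests with a data-driven threshold — can be sketched as follows. The robust median/MAD background fit and the Benjamini-Hochberg threshold are illustrative choices, not necessarily those of the paper.

```python
import math
import numpy as np

def bh_threshold(p, alpha=0.05):
    """Benjamini-Hochberg step-up rule: the data-driven p-value cutoff."""
    ps = np.sort(p)
    n = len(ps)
    ok = ps <= alpha * np.arange(1, n + 1) / n
    return ps[ok].max() if ok.any() else 0.0

def segment_outliers(img, alpha=0.05):
    """Segment = pixels that are outliers under an estimated Gaussian
    background. The background is fit robustly (median/MAD) so the
    segment itself does not bias the estimate."""
    x = img.ravel()
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # MAD -> sigma for a Gaussian
    z = np.abs(x - med) / mad
    p = np.array([math.erfc(v / math.sqrt(2)) for v in z])  # two-sided p-value
    return (p <= bh_threshold(p, alpha)).reshape(img.shape)

# Toy image: Gaussian noise background with one bright object
rng = np.random.default_rng(0)
img = rng.standard_normal((50, 50))
img[10:20, 10:20] += 6.0
mask = segment_outliers(img)
```

The point of the per-pixel testing view is that the threshold is not hand-tuned: it falls out of the estimated background and the chosen error rate.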

  15. Large scale PV plants - also in Denmark. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Ahm, P. (PA Energy, Malling (Denmark)); Vedde, J. (SiCon. Silicon and PV consulting, Birkeroed (Denmark))


    Large scale PV (LPV) plants, plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of global PV installations. In 2009, large scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but as no real LPV projects have been processed so far, these findings have to be regarded as preliminary. The fast growing number of very large scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the layout of LPV plants. Under Danish irradiance conditions, with several winter months of very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space and cost against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark is found in three main categories: PV installations on the flat roofs of large commercial buildings, PV installations on other large scale infrastructure such as noise barriers, and ground mounted PV installations. The technical potential for all three categories is found to be significant, in the range of 50-250 km2. In terms of energy harvest, PV plants under Danish conditions exhibit an overall efficiency of about 10% in converting the energy content of the light, compared to about 0.3% for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark, 33-35 TWh, is about 300 km2. The Danish grid codes and the electricity safety regulations mention very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines.
A number of LPV plant scenarios have been investigated in detail based on real commercial offers and

  16. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei


    Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid in the form of large scale wind farms made up of dozens of wind turbines, and the scale of wind farms has increased further in recent years. Due to the intermittent and variable wind source, reliability...... wind farm, which is able to enhance the capability of delivering power instead of controlling an uncontrollable wind power output. Therefore, this paper introduces a method to evaluate reliability depending on the structure of the wind farm and to reflect the result in the planning stage of the wind farm.

  17. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles (United States)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael


    Risk reduction in large-scale composite manufacturing is an important goal in producing lightweight components for heavy lift launch vehicles. NASA and an industry team successfully employed a building block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10 meter cylindrical barrel. Lessons learned highlight the manufacturing challenges involved in producing lightweight composite structures such as fairings for heavy lift launch vehicles.

  18. Large-scale cathodic carboxylation of copper surfaces


    Simonet, Jacques


    Large-scale carboxylation of copper can easily be achieved by reduction of CO2 solubilised in aprotic polar solvents in the presence of tetramethylammonium salts (TMeA+ X-). Carbon dioxide could be inserted into the metal matrix (presumably in the form of the carbon dioxide anion radical) at high surface concentrations (up to 10^-7 mol cm^-2), most probably organized in multi-layers. With significant amounts of electricity (> 0.1 × 10^-2 C cm^-2), this cathodic procedure leads to a...

  19. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten


    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...... improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extending initial design and development into a sustained and ongoing stepwise implementation that constitutes an overall technology-driven organizational change. The extended approach is exemplified through...

  20. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten


    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach...... into a PD process model that (1) emphasizes PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporates improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extends initial...

  1. Solving Large-scale Eigenvalue Problems in SciDAC Applications

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chao


    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculations. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.
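A minimal example of the Krylov subspace idea the paper starts from: a Lanczos iteration that extracts an extreme eigenvalue of a symmetric matrix from a low-dimensional subspace. This is a pure-numpy sketch on a toy matrix; the paper's solvers and the sub-structuring acceleration are far more elaborate.

```python
import numpy as np

def lanczos_extreme(A, k=80, seed=0):
    """Approximate the largest eigenvalue of a symmetric matrix from a
    k-dimensional Krylov subspace (Lanczos with full reorthogonalization)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    m = k
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        # full reorthogonalization keeps the basis numerically orthogonal
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:            # Krylov space exhausted early
            m = j + 1
            break
        q = w / beta[j]
    # Ritz values = eigenvalues of the small tridiagonal projection
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T).max()

# Toy symmetric problem with known spectrum 1..200
rng = np.random.default_rng(1)
n = 200
Qr, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Qr @ np.diag(np.arange(1.0, n + 1)) @ Qr.T
lam_max = lanczos_extreme(A, k=80)
```

The matrix is only ever touched through matrix-vector products `A @ q`, which is exactly why Krylov methods scale to the very large sparse problems the paper targets.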

  2. Large scale obscuration and related climate effects open literature bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.


    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  3. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard


    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  4. Large-Scale Experiments in a Sandy Aquifer in Denmark

    DEFF Research Database (Denmark)

    Jensen, Karsten Høgh; Bitsch, Karen Bue; Bjerg, Poul Løgstrup


    A large-scale natural gradient dispersion experiment was carried out in a sandy aquifer in the western part of Denmark using tritium and chloride as tracers. For both plumes a marked spreading was observed in the longitudinal direction while the spreading in the transverse horizontal and transverse...... vertical directions was very small. The horizontal transport parameters of the advection-dispersion equation were investigated by applying an optimization model to observed breakthrough curves of tritium representing depth averaged concentrations. No clear trend in dispersion parameters with travel...

  5. Large-scale File System Design and Architecture

    Directory of Open Access Journals (Sweden)

    V. Dynda


    This paper deals with design issues of a global file system, aiming to provide transparent data availability, security against loss and disclosure, and support for mobile and disconnected clients. First, the paper surveys general challenges and requirements for large-scale file systems; then the design of particular elementary parts of the proposed file system is presented. This includes the design of the raw system architecture, the design of dynamic file replication with appropriate data consistency, file location, and data security. Our proposed system is called Gaston, and will be referred to further in the text under this name or its abbreviation GFS (Gaston File System).

  6. Large-Scale Query and XMatch, Entering the Parallel Zone (United States)

    Nieto-Santisteban, M. A.; Thakar, A. R.; Szalay, A. S.; Gray, J.


    Current and future astronomical surveys are producing catalogs with millions and billions of objects. On-line access to such big datasets for data mining and cross-correlation is usually as highly desired as it is infeasible. Providing these capabilities is becoming critical for the Virtual Observatory framework. In this paper we present various performance tests that show how, using Relational Database Management Systems (RDBMS) and a zoning algorithm to partition and parallelize the computation, we can facilitate large-scale query and cross-match.
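The zoning idea can be sketched in a few lines: bucket one catalog into declination stripes of height equal to the match radius, so each object is compared only against three neighboring stripes instead of the whole catalog. This toy version (coordinates and radius invented for illustration) follows the spirit of the approach, not the paper's SQL implementation.

```python
import math
from collections import defaultdict

def zone_crossmatch(cat1, cat2, radius_deg):
    """Match (ra, dec) points of cat1 to cat2 within radius_deg using
    zoning: bucket cat2 by declination stripe, then compare each cat1
    point only against the three stripes that can contain a match."""
    h = radius_deg                       # zone height = match radius
    zones = defaultdict(list)
    for j, (ra, dec) in enumerate(cat2):
        zones[math.floor(dec / h)].append(j)

    def ang_sep(ra1, dec1, ra2, dec2):   # haversine separation, degrees
        r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
        a = (math.sin((d2 - d1) / 2) ** 2
             + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
        return math.degrees(2 * math.asin(math.sqrt(a)))

    pairs = []
    for i, (ra, dec) in enumerate(cat1):
        z = math.floor(dec / h)
        for dz in (-1, 0, 1):            # only adjacent zones can match
            for j in zones.get(z + dz, ()):
                if ang_sep(ra, dec, *cat2[j]) <= radius_deg:
                    pairs.append((i, j))
    return pairs

cat1 = [(10.0, 20.0), (180.0, -45.0)]
cat2 = [(10.0004, 20.0003), (180.0, -45.5), (10.0, 21.0)]
print(zone_crossmatch(cat1, cat2, radius_deg=0.001))
```

In an RDBMS the zone number becomes an indexed column, so the per-object candidate set is retrieved with a range scan and the zones can be processed in parallel, which is the parallelization the paper exploits.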

  7. Development of Large-Scale Spacecraft Fire Safety Experiments

    DEFF Research Database (Denmark)

    Ruff, Gary A.; Urban, David L.; Fernandez-Pello, A. Carlos


    with the data downlinked at the conclusion of the test before the Cygnus vehicle reenters the atmosphere. Several computer modeling and ground-based experiment efforts will complement the flight experiment effort. The international topical team is collaborating with the NASA team in the definition...... of the spacecraft fire safety risk. The activity of this project is supported by an international topical team of fire experts from other space agencies who conduct research that is integrated into the overall experiment design. The large-scale space flight experiment will be conducted in an Orbital Sciences...

  8. Infrastructure and interfaces for large-scale numerical software.

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.; Gropp, W. D.; Hovland, P. D.; McInnes, L. C.; Smith, B. F.


    The complexity of large-scale scientific simulations often necessitates the combined use of multiple software packages developed by different groups in areas such as adaptive mesh manipulations, scalable algebraic solvers, and optimization. Historically, these packages have been combined by using custom code. This practice inhibits experimentation with and comparison of multiple tools that provide similar functionality through different implementations. The ALICE project, a collaborative effort among researchers at Argonne National Laboratory, is exploring the use of component-based software engineering to provide better interoperability among numerical toolkits. They discuss some initial experiences in developing an infrastructure and interfaces for high-performance numerical computing.

  9. Cosmology from large-scale structure observations: a subjective review (United States)

    Bilicki, Maciej; Hellwing, Wojciech A.


    In these lecture notes we give a brief overview of cosmological inference from the observed large-scale structure (LSS) of the Universe. After a general introduction, we briefly summarize the current status of the standard cosmological model, ΛCDM, and then discuss a few general puzzles related to this otherwise successful model. Next, after a concise presentation of LSS properties, we describe various observational cosmological probes, such as baryon acoustic oscillations, redshift space distortions and weak gravitational lensing. We also provide examples of how the rapidly developing technique of cross-correlations of cosmological datasets is applied. Finally, we briefly mention the promise brought to cosmology by gravitational wave detections.

  10. Large-Scale Training of SVMs with Automata Kernels (United States)

    Allauzen, Cyril; Cortes, Corinna; Mohri, Mehryar

    This paper presents a novel application of automata algorithms to machine learning. It introduces the first optimization solution for support vector machines used with sequence kernels that is purely based on weighted automata and transducer algorithms, without requiring any specific solver. The algorithms presented apply to a family of kernels covering all those commonly used in text and speech processing or computational biology. We show that these algorithms have significantly better computational complexity than previous ones and report the results of large-scale experiments demonstrating a dramatic reduction of the training time, typically by several orders of magnitude.

  11. Cost Overruns in Large-scale Transportation Infrastructure Projects

    DEFF Research Database (Denmark)

    Cantarelli, Chantal C; Flyvbjerg, Bent; Molin, Eric J. E


    Managing large-scale transportation infrastructure projects is difficult due to frequent misinformation about the costs which results in large cost overruns that often threaten the overall project viability. This paper investigates the explanations for cost overruns that are given in the literature....... Overall, four categories of explanations can be distinguished: technical, economic, psychological, and political. Political explanations have been seen to be the most dominant explanations for cost overruns. Agency theory is considered the most interesting for political explanations and an eclectic theory...

  12. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik


    integration by 8%. The application of EVs benefits from saving both energy system cost and fuel cost. However, the negative consequences of decreasing energy system efficiency and increasing the CO2 emission should be noted when applying the hydrogen fuel cell vehicle (HFCV). The results also indicate...... was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power...

  13. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.


    can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate......Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...

  14. Cosmological constraints from large-scale structure growth rate measurements (United States)

    Pavlov, Anatoly; Farooq, Omer; Ratra, Bharat


    We compile a list of 14 independent measurements of the large-scale structure growth rate spanning redshifts 0.067 ≤ z ≤ 0.8 and use this to place constraints on model parameters of constant and time-evolving general-relativistic dark energy cosmologies. With the assumption that gravity is well modeled by general relativity, we find that growth-rate data provide restrictive cosmological parameter constraints. In combination with type Ia supernova apparent magnitude versus redshift data and Hubble parameter measurements, the growth rate data are consistent with the standard spatially flat ΛCDM model, as well as with mildly evolving dark energy density cosmological models.
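For orientation, the growth rate constrained by such measurements is well approximated in ΛCDM by Linder's f(z) ≈ Ω_m(z)^0.55. A quick numeric check, using an illustrative Ω_m0 = 0.3 rather than any value fitted in the paper:

```python
import math

def omega_m(z, om0=0.3):
    """Matter density parameter at redshift z in flat LambdaCDM."""
    e2 = om0 * (1 + z) ** 3 + (1 - om0)   # H(z)^2 / H0^2
    return om0 * (1 + z) ** 3 / e2

def growth_rate(z, om0=0.3, gamma=0.55):
    """Linder approximation f(z) = Omega_m(z)^gamma, accurate to a few
    tenths of a percent in LambdaCDM."""
    return omega_m(z, om0) ** gamma

# Growth rate at the edges of the data's redshift range
for z in (0.067, 0.3, 0.8):
    print(f"z={z}: f={growth_rate(z):.3f}")
```

The monotonic rise of f with z (matter dominates earlier, so structure grows faster) is what makes measurements spread over 0.067 ≤ z ≤ 0.8 informative about dark energy evolution.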

  15. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    plant [3]. The goal of the project is to utilize realtime data extracted from the large scale facility to formulate and validate first principle dynamic models of the plant. These models are then further exploited to derive model-based tools for process optimization, advanced control and real...... with building a plantwide model-based optimization layer, which searches for optimal values regarding the pretreatment temperature, enzyme dosage in liquefaction, and yeast seed in fermentation such that profit is maximized [7]. When biomass is pretreated, by-products are also created that affect the downstream...

  16. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred


    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.

  17. Facile Large-Scale Synthesis of 5- and 6-Carboxyfluoresceins

    DEFF Research Database (Denmark)

    Hammershøj, Peter; Ek, Pramod Kumar; Harris, Pernille


    A series of fluorescein dyes have been prepared from a common precursor through a very simple synthetic procedure, giving access to important precursors for fluorescent probes. The method has proven to give efficient access to regioisomerically pure 5- and 6-carboxyfluoresceins on a large scale, in good...... yields, and with high regioisomeric purity. Furthermore, we have applied the method to the development of a new type of mixed fluorescein derivative of 5-carboxyfluorescein. We have demonstrated the scope of the procedure by synthesizing a new type of double chromophore within the fluoro-Jade family....

  18. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.


    Long-term stabilization of injected carbon dioxide (CO 2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO 2 region leading to large-scale convective mixing that can be a significant driver for CO 2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO 2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO 2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO 2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO 2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO 2 emissions, and explore the sensitivity of CO 2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO 2 storage sites. © 2012 Elsevier Ltd.

  19. Large-scale stabilization control of input-constrained quadrotor

    Directory of Open Access Journals (Sweden)

    Jun Jiang


    The quadrotor has been the most popular aircraft of the last decade due to its excellent dynamics, and it continues to attract ever-increasing research interest. Delivering a quadrotor from a large fixed-wing aircraft is a promising application of quadrotors. In such an application, the quadrotor needs to switch from a highly unstable status, characterized by large initial states, to a safe and stable flight status; this is the so-called large-scale stability control problem. In such an extreme scenario, the quadrotor is at risk of actuator saturation, which can cause the controller to update incorrectly and lead the quadrotor to spiral and crash. In this article, to safely control the quadrotor in such scenarios, the control input constraint is analyzed. The key states of a quadrotor dynamic model are selected, and a two-dimensional dynamic model is extracted based on a symmetrical body configuration. A generalized point-wise min-norm nonlinear control method based on a Lyapunov function is proposed, and large-scale stability control is hence achieved. An enhanced point-wise min-norm control is further provided to improve the attitude control performance, at the cost of slightly degraded altitude performance. Simulation results show that the proposed control methods can stabilize the input-constrained quadrotor and that the enhanced method improves the performance of the quadrotor in critical states.
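For reference, the point-wise min-norm construction the article builds on has a standard closed form. The notation below is generic (control-affine dynamics ẋ = f(x) + g(x)u with a control Lyapunov function V and a designer-chosen decrease margin σ(x)); it is not taken from the paper itself.

```latex
% Pointwise min-norm control: smallest input that decreases V fast enough
u^{*}(x) = \arg\min_{u}\ \|u\|^{2}
\quad \text{s.t.} \quad L_{f}V(x) + L_{g}V(x)\,u \le -\sigma(x)

% Closed-form solution, with a(x) = L_f V(x) + \sigma(x),\ b(x) = (L_g V(x))^{\top}:
u^{*}(x) =
\begin{cases}
0, & a(x) \le 0,\\[4pt]
-\dfrac{a(x)}{\|b(x)\|^{2}}\, b(x), & a(x) > 0.
\end{cases}
```

Input constraints are what make the quadrotor case hard: when this unconstrained minimizer exceeds the actuator limits, saturation invalidates the decrease condition, which is why the article analyzes the input constraint explicitly.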

  20. Practical considerations for large-scale gut microbiome studies. (United States)

    Vandeputte, Doris; Tito, Raul Y; Vanleeuwen, Rianne; Falony, Gwen; Raes, Jeroen


    First insights on the human gut microbiome have been gained from medium-sized, cross-sectional studies. However, given the modest portion of explained variance of currently identified covariates and the small effect size of gut microbiota modulation strategies, upscaling seems essential for further discovery and characterisation of the multiple influencing factors and their relative contribution. In order to guide future research projects and standardisation efforts, we here review currently applied collection and preservation methods for gut microbiome research. We discuss aspects such as sample quality, applicable omics techniques, user experience and time and cost efficiency. In addition, we evaluate the protocols of a large-scale microbiome cohort initiative, the Flemish Gut Flora Project, to give an idea of perspectives, and pitfalls of large-scale faecal sampling studies. Although cryopreservation can be regarded as the gold standard, freezing protocols generally require more resources due to cold chain management. However, here we show that much can be gained from an optimised transport chain and sample aliquoting before freezing. Other protocols can be useful as long as they preserve the microbial signature of a sample such that relevant conclusions can be drawn regarding the research question, and the obtained data are stable and reproducible over time. © FEMS 2017.

  1. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera


    In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. Real world social networks follow the small world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, and when large-scale social networks are considered, these algorithms are observed to take considerably longer. In this work, with the objective of improving efficiency, a parallel programming framework, Map-Reduce, has been used for uncovering hidden communities in social networks. The proposed approach has been compared with standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
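The random-walk intuition — nodes in the same community have similar short-walk distributions — can be illustrated with a tiny Walktrap-style sketch. The toy graph and distance threshold are invented for illustration; the paper's Map-Reduce algorithm differs in both scale and detail.

```python
import numpy as np

def walk_communities(adj, t=3, tol=0.5):
    """Walktrap-style sketch: put two nodes in the same community when
    their t-step random-walk distributions are close in L1 distance."""
    A = np.array(adj, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    Pt = np.linalg.matrix_power(P, t)      # t-step walk distributions
    n = len(A)
    labels = [-1] * n
    cur = 0
    for i in range(n):                     # greedy single-pass grouping
        if labels[i] != -1:
            continue
        labels[i] = cur
        for j in range(i + 1, n):
            if labels[j] == -1 and np.abs(Pt[i] - Pt[j]).sum() < tol:
                labels[j] = cur
        cur += 1
    return labels

# Two 4-cliques bridged by a single edge (nodes 0-3 and 4-7)
n = 8
A = np.zeros((n, n), int)
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1                      # the bridge
print(walk_communities(A))
```

Short walks starting inside a dense group mostly stay inside it, so the walk distributions of same-community nodes nearly coincide while those across the bridge differ sharply; that is the property large-scale walk-based detectors exploit.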

  2. Very-large-scale coherent motions in open channel flows (United States)

    Zhong, Qiang; Hussain, Fazle; Li, Dan-Xun


    Very-large-scale coherent structures (VLSSs) - whose characteristic length is of the order of 10h (h is the water depth) - are found to exist in the log and outer layers near the bed of open channel flows. For decades researchers have speculated that large coherent structures may exist in open channel flows, but conclusive evidence has been lacking. The present study employed pre-multiplied velocity power spectral and co-spectral analyses of time-resolved PIV data obtained in open channel flows. In all cases, two modes - large-scale structures (of the order of h) and VLSSs - dominate the log and outer layers of the turbulent boundary layer. More than half of the TKE and 40% of the Reynolds shear stress in the log and outer layers are contributed by VLSSs. The difference in the strength of VLSSs between open and closed channel flows leads to pronounced redistribution of TKE near the free surface of open channel flows, a unique phenomenon that sets open channel flows apart from other wall-bounded turbulent flows. Funded by China Postdoctoral Science Foundation (No.2015M580105), National Natural Science Foundation of China (No.51127006).

  3. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping (United States)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.


    Large scale mapping of limited areas, especially cultural heritage sites, is particularly demanding. Optical and non-optical sensors, e.g. LiDAR units, have been developed to sizes and weights that can be lifted by unmanned aerial platforms. At the same time there is increasing emphasis on solutions that enable users to access 3D information faster and cheaper. Considering the multitude of platforms and cameras, the advancement of algorithms, and the increase in available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of UAS technologies today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large scale mapping. Furthermore, a thorough analysis, review, and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is conducted and presented. As a test data set, we use a rich optical and thermal data set from both fixed wing and multi-rotor platforms over an archaeological excavation with adverse height variations, acquired using different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  4. Remote Sensing Image Classification With Large-Scale Gaussian Processes (United States)

    Morales-Alvarez, Pablo; Perez-Suay, Adrian; Molina, Rafael; Camps-Valls, Gustau


    Current remote sensing image classification problems have to deal with an unprecedented amount of heterogeneous and complex data sources. Upcoming missions will soon provide large data streams that will make land cover/use classification difficult. Machine learning classifiers can help here, and many methods are currently available. A popular kernel classifier is the Gaussian process classifier (GPC), since it approaches the classification problem with a solid probabilistic treatment, yielding confidence intervals for the predictions as well as results very competitive with state-of-the-art neural networks and support vector machines. However, its computational cost is prohibitive for large scale applications, and this constitutes the main obstacle precluding wide adoption. This paper tackles the problem by introducing two novel efficient methodologies for Gaussian Process (GP) classification. We first include the standard random Fourier features approximation into GPC, which largely decreases its computational cost and permits large scale remote sensing image classification. In addition, we propose a model which avoids randomly sampling a number of Fourier frequencies, and alternatively learns the optimal ones within a variational Bayes approach. The performance of the proposed methods is illustrated in complex problems of cloud detection from multispectral imagery and infrared sounding data. Excellent empirical results support the proposal in both computational cost and accuracy.
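The first speed-up mentioned, the standard random Fourier features approximation, is easy to sketch: random cosine features whose inner products approximate an RBF kernel, so GP (or any kernel) machinery can run on explicit finite-dimensional features. The kernel bandwidth and feature count below are illustrative, not values from the paper.

```python
import numpy as np

def rff_map(X, W, b):
    """Random Fourier feature map z(x) = sqrt(2/D) * cos(W x + b).
    Inner products z(x).z(y) approximate the RBF kernel
    exp(-||x - y||^2 / (2 s^2)) when rows of W are drawn N(0, I/s^2)."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, D, s = 5, 4000, 1.0                   # input dim, feature count, bandwidth
W = rng.standard_normal((D, d)) / s      # random frequencies
b = rng.uniform(0, 2 * np.pi, D)         # random phases

# Check the kernel approximation on a random pair of points
x = rng.standard_normal(d)
y = rng.standard_normal(d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * s ** 2))
approx = float(rff_map(x[None], W, b) @ rff_map(y[None], W, b).T)
```

Once the data are mapped through z(.), the classifier only ever sees a D-dimensional linear problem, which is what removes the cubic-in-sample-size cost of exact GP classification.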

  5. The combustion behavior of large scale lithium titanate battery (United States)

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua


    Safety remains a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behaviour is limited. To investigate the combustion behaviour of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to characterize the combustion behaviour directly, while the mass loss rate, temperature and heat release rate are used to analyse the underlying reactions in depth. Based on the observed phenomena, the combustion process is divided into three basic stages; it becomes more complicated at higher SOC, with sudden jets of smoke ejected. The reason is a phase change in the Li(NixCoyMnz)O2 material from a layered to a spinel structure. For all cells, the critical ignition temperatures are 112-121°C on the anode tab and 139-147°C on the upper surface, but the heating time and combustion time shorten with increasing SOC. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference. PMID:25586064

  6. Large Scale Land Acquisition as a driver of slope instability (United States)

    Danilo Chiarelli, Davide; Rulli, Maria Cristina; Davis, Kyle F.; D'Odorico, Paolo


    Forests play a key role in preventing shallow landslides, and deforestation has been identified as one of the main causes of increased mass wasting on hillslopes undergoing land cover change. In the last few years vast tracts of land have been acquired by foreign investors to satisfy an increasing demand for agricultural products. Large Scale Land Acquisitions (LSLA) often entail the conversion of forested landscapes into agricultural fields. Mozambique has been a major target of LSLAs, and there is evidence that much of the acquired land has recently undergone forest clearing. The Zambezia Province in Mozambique lost more than 500,000 ha of forest from 2000 to 2014; 25.4% of this loss occurred in areas acquired by large-scale land investors. According to Land Matrix, an open-source database of reported land deals, there are currently 123 intended and confirmed deals in Mozambique; collectively they account for 2.34 million ha, the majority located in forested areas. This study analyses the relationship between deforestation taking place inside LSLA areas (usually for agricultural purposes) and the likelihood of landslide occurrence in the Zambezia province. To this aim we use a spatially distributed, physically based model that couples slope stability analysis with a hillslope-scale hydrological model, and we compare the change in slope stability associated with the forest loss documented by satellite imagery.
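
    The slope stability component of such coupled models is commonly based on the infinite-slope factor of safety. The sketch below is a minimal stand-in for the paper's model (all parameter values, including the 5 kPa root cohesion, are invented for illustration); it shows how losing root cohesion after forest clearing can push a wet hillslope from stable to unstable.

```python
import numpy as np

# Infinite-slope factor of safety: illustrative parameters only.
GAMMA, GAMMA_W = 18.0e3, 9.81e3      # soil and water unit weight (N/m^3)
Z, PHI = 2.0, np.radians(30.0)       # soil depth (m), friction angle

def factor_of_safety(slope_deg, cohesion_pa, wetness):
    """FS < 1 flags potential instability; wetness = water table depth / soil depth."""
    th = np.radians(slope_deg)
    resisting = cohesion_pa + (GAMMA - wetness * GAMMA_W) * Z * np.cos(th) ** 2 * np.tan(PHI)
    driving = GAMMA * Z * np.sin(th) * np.cos(th)
    return resisting / driving

# Same wet hillslope with forest root cohesion (5 kPa) and after clearing (0):
fs_forest = factor_of_safety(25.0, 5.0e3, 0.8)
fs_cleared = factor_of_safety(25.0, 0.0, 0.8)
print(round(fs_forest, 2), round(fs_cleared, 2))  # stable vs unstable
```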

  7. Large-Scale CFD Parallel Computing Dealing with Massive Mesh

    Directory of Open Access Journals (Sweden)

    Zhi Shang


    In order to run CFD codes more efficiently at large scales, parallel computing has to be employed. For example, at industrial scales it usually takes tens of thousands of mesh cells to capture the details of complex geometries. Distributing these mesh cells among the multiprocessors so as to obtain good high-performance computing (HPC) behaviour is a real challenge. Because of the massive number of mesh cells involved, CFD codes without parallel optimizations struggle with this kind of large-scale computing. Several open-source mesh partitioning packages, such as Metis, ParMetis, Scotch, PT-Scotch, and Zoltan, are able to handle the distribution of large numbers of mesh cells. They were therefore ported into Code_Saturne, an open-source CFD code, as parallel optimization tools, to test whether they can solve the problem of handling massive meshes in CFD codes. The studies showed that these mesh partitioning packages help CFD codes not only deal with massive numbers of mesh cells but also achieve good HPC performance.
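
    What these partitioners optimize can be shown with a toy example (this is not Code_Saturne code): balance the cells across ranks while minimizing the edge-cut, i.e. the number of cell faces whose two cells live on different processors and therefore require communication.

```python
import numpy as np

# Naive partition of a structured 64x64 mesh over 8 ranks, scored by the
# edge-cut that tools like Metis or Scotch try to minimize. Illustrative only.
NX, NY, NPARTS = 64, 64, 8

# Contiguous block partition by row-major cell index.
part = (np.arange(NX * NY) * NPARTS // (NX * NY)).reshape(NX, NY)

def edge_cut(p):
    """Count interior faces whose two cells belong to different partitions."""
    return (np.count_nonzero(p[1:, :] != p[:-1, :]) +
            np.count_nonzero(p[:, 1:] != p[:, :-1]))

counts = np.bincount(part.ravel(), minlength=NPARTS)
print(edge_cut(part), counts.max() - counts.min())  # 448 0
```

    Here the block partition is perfectly balanced (512 cells per rank) with 448 cut faces; a graph partitioner searches for partitions with a smaller cut at the same balance.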

  8. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    Directory of Open Access Journals (Sweden)

    Laurynas Riliskis


    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  9. Survey of large-scale isotope applications: nuclear technology field

    Energy Technology Data Exchange (ETDEWEB)

    Dewitt, R.


    A preliminary literature survey of potential large-scale isotope applications was made according to topical fields; i.e., nuclear, biological, medical, environmental, agricultural, geological, and industrial. Other than the possible expansion of established large-scale isotope applications such as uranium, boron, lithium, and hydrogen, no new immediate isotope usage appears to be developing. Over the long term, a change in emphasis for isotope applications was identified that appears more responsive to societal concerns for health, the environment, and the conservation of materials and energy. For gram-scale applications, a variety of isotopes may be required for use as nonradioactive ''activable'' tracers. A more detailed survey of the nuclear field identified a potential need for large amounts (tons) of special isotopic materials for advanced reactor components and structures. As this need for special materials and the development of efficient separation methods progress, the utilization of isotopes from nuclear wastes for beneficial uses should also progress.

  10. Maestro: An Orchestration Framework for Large-Scale WSN Simulations (United States)

    Riliskis, Laurynas; Osipov, Evgeny


    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  11. Large scale floodplain mapping using a hydrogeomorphic method (United States)

    Nardi, F.; Yan, K.; Di Baldassarre, G.; Grimaldi, S.


    Floodplain landforms are clearly distinguishable from adjacent hillslopes, being the trace of severe floods that shaped the terrain. Digital topography therefore intrinsically contains floodplain information, and this work presents the results of applying a DEM-based, large-scale hydrogeomorphic floodplain delineation method. The proposed approach, based on the integration of terrain analysis algorithms in a GIS framework, automatically identifies the potentially frequently saturated zones of riparian areas by analysing the maximum flood flow heights associated with stream network nodes relative to the surrounding uplands. Flow heights are estimated by imposing a Leopold-type law that scales with the contributing area. The presented case studies include floodplain maps of large river basins covering the entire Italian territory, which are also used to calibrate the Leopold scaling parameters, as well as additional large international river basins with different climatic and geomorphic characteristics, laying the basis for using this approach for global floodplain mapping. The proposed tool could be useful for detecting hydrological change, since it can easily provide maps to verify the impact of floods on human activities and, conversely, how human activities have changed in floodplain areas at large scale.
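
    A minimal sketch of the core scaling step, assuming a Leopold-type power law h = a·A^b (the coefficients below are invented; the paper calibrates them on Italian basins): a cell is flagged as floodplain when its elevation above the nearest stream node is below the flood flow height at that node.

```python
# Hypothetical Leopold scaling coefficients, for illustration only.
A_COEFF, B_EXP = 0.1, 0.4

def flood_flow_height(contributing_area_km2):
    """Leopold-type power law: flow height scales with contributing area."""
    return A_COEFF * contributing_area_km2 ** B_EXP

def is_floodplain(cell_elev_m, stream_elev_m, stream_area_km2):
    """A cell is floodplain if its height above the nearest stream node
    is below the maximum flood flow height at that node."""
    return (cell_elev_m - stream_elev_m) <= flood_flow_height(stream_area_km2)

h = flood_flow_height(1000.0)          # flow height at a 1000 km^2 stream node
print(h, is_floodplain(52.0, 51.0, 1000.0))
```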

  12. Development of large-scale functional brain networks in children.

    Directory of Open Access Journals (Sweden)

    Kaustubh Supekar


    The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young adults (ages 19-22 y). Comparison of network properties, including path length, clustering coefficient, hierarchy, and regional connectivity, revealed that although children's and young adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity for elucidating key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.
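
    The two global "small-world" metrics used here, characteristic path length and clustering coefficient, can be illustrated on a standard Watts-Strogatz toy graph with networkx. This is synthetic graph data, not brain connectivity; it only demonstrates the small-world signature of high clustering combined with short path lengths.

```python
import networkx as nx

n, k = 100, 6
lattice = nx.connected_watts_strogatz_graph(n, k, 0.0, seed=0)  # regular ring lattice
rewired = nx.connected_watts_strogatz_graph(n, k, 0.1, seed=0)  # small-world regime

for name, G in (("lattice", lattice), ("small-world", rewired)):
    print(name,
          round(nx.average_clustering(G), 3),
          round(nx.average_shortest_path_length(G), 3))
```

    A small fraction of rewired long-range edges sharply shortens the average path length while clustering stays high, which is the pattern both child and adult brain networks exhibit at the global level.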

  13. The effective field theory of cosmological large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M. [Stanford Univ., Stanford, CA (United States); Hertzberg, Mark P. [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Senatore, Leonardo [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)


    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.


    Directory of Open Access Journals (Sweden)

    Enrico Adriano Raffaelli


    In light of the slow modernization of the Italian large-scale food distribution sector, of its fragmentation at the national level, of the significant role of cooperatives at the local level and of the alliances between food retail chains, the ICA has in recent years developed a strong interest in this sector. After analysing the peculiarities of the Italian large-scale food distribution sector, this article describes the approach recently taken by the ICA toward the main antitrust issues in the sector. In the analysis of these issues, mainly the contractual relations between GDO retailers and their suppliers, the introduction of Article 62 of Law no. 27 dated 24th March 2012 is crucial: by facilitating and encouraging complaints by the interested parties, it should allow normal competitive dynamics to develop within the food distribution sector, where companies should be free to enter the market using the tools at their disposal, without undue restrictions.

  15. Auxanographic Carbohydrate Assimilation Method for Large Scale Yeast Identification. (United States)

    Devadas, Suganthi Martena; Ballal, Mamatha; Prakash, Peralam Yegneswaran; Hande, Manjunath H; Bhat, Geetha V; Mohandas, Vinitha


    Auxanographic carbohydrate assimilation has long been an important method for differentiating yeasts, but the prevailing methods described in the literature have limited scope for large-scale yeast identification. To optimize a large-scale auxanographic carbohydrate assimilation method for yeast identification, a modified method was developed and validated on a total of 35 Candida isolates, comprising four ATCC (American Type Culture Collection) strains (Candida albicans ATCC 90028, Candida tropicalis ATCC 90018, Candida parapsilosis ATCC 750, Candida krusei ATCC 6258) and 31 clinical isolates of Candida tropicalis (n=13), Candida krusei (n=7), Candida glabrata (n=3), Candida kefyr (n=3) and Candida albicans (n=5). The carbohydrates tested were glucose, sucrose, maltose, lactose, cellobiose, raffinose, trehalose, xylose, galactose and dulcitol. All 35 isolates were tested for their carbohydrate assimilation properties, and the results were consistent with existing standard protocols. Well-circumscribed opaque yeast growth indicated assimilation of the test carbohydrate, whereas translucent to opalescent growth outlining only the initial inoculum indicated lack of assimilation. The control plate showed no growth of the Candida species. Carbohydrate assimilation tests find utility in yeast diversity studies exploring novel ecological niches; the technique described here allows an extended range of carbohydrates and yeasts to be tested in a cost-effective manner.

  16. Maestro: an orchestration framework for large-scale WSN simulations. (United States)

    Riliskis, Laurynas; Osipov, Evgeny


    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  17. Large-scale climatic control on European precipitation (United States)

    Lavers, David; Prudhomme, Christel; Hannah, David


    Precipitation variability has a significant impact on society. Sectors such as agriculture and water resources management rely on a predictable and reliable precipitation supply, and extreme variability can have adverse socio-economic impacts. Understanding the climate drivers of precipitation is therefore of direct societal relevance. This research examines the strength, location and seasonality of links between precipitation and large-scale Mean Sea Level Pressure (MSLP) fields across Europe. In particular, we aim to evaluate whether European precipitation is correlated with the same atmospheric circulation patterns or whether there is strong spatial and/or seasonal variation in the strength and location of the centres of correlation. The work exploits time series of gridded ERA-40 MSLP on a 2.5°×2.5° grid (0°N-90°N and 90°W-90°E) and gridded European precipitation from the Ensemble project on a 0.5°×0.5° grid (36.25°N-74.25°N and 10.25°W-24.75°E). Monthly Spearman rank correlation analysis was performed between MSLP and precipitation. During winter, a significant MSLP-precipitation correlation dipole pattern exists across Europe: strong negative (positive) correlation near the Icelandic Low and positive (negative) correlation near the Azores High pressure centre are found in northern (southern) Europe. These correlation dipoles resemble the structure of the North Atlantic Oscillation (NAO). The reversal in the correlation dipole patterns occurs at the latitude of central France, with regions to the north (British Isles, northern France, Scandinavia) having a positive relationship with the NAO, and regions to the south (Italy, Portugal, southern France, Spain) exhibiting a negative relationship with the NAO. In the lee of the mountain ranges of eastern Britain and central Sweden, correlation with North Atlantic MSLP is reduced, reflecting a reduced influence of westerly flow on precipitation generation as the mountains act as a barrier to moist
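
    The monthly Spearman rank correlation step can be sketched in a few lines of NumPy on synthetic data (the series length and coupling strength below are invented; the study uses gridded ERA-40 and European precipitation fields).

```python
import numpy as np

rng = np.random.default_rng(42)

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic stand-in for one grid point: monthly MSLP anomalies near the
# Azores High and positively coupled precipitation in northern Europe.
n = 120
mslp = rng.normal(size=n)
precip = 0.6 * mslp + rng.normal(scale=0.8, size=n)

print(spearman_rho(mslp, precip))  # positive, NAO-like relationship
```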

  18. Creating restoration landscapes: partnerships in large-scale conservation in the UK

    Directory of Open Access Journals (Sweden)

    William M. Adams


    It is increasingly recognized that ecological restoration demands conservation action beyond the borders of existing protected areas. This requires the coordination of land uses and management over a larger area, usually with a range of partners, which presents novel institutional challenges for conservation planners. Interviews were undertaken with managers of a purposive sample of large-scale conservation areas in the UK. Interviews were open-ended and analyzed using standard qualitative methods. Results show that a wide variety of organizations are involved in large-scale conservation projects, and that partnerships take time to create and demand resilience in the face of different organizational practices, staff turnover, and short-term funding. Successful partnerships with local communities depend on the establishment of trust and the availability of external funds to support conservation land uses. We conclude that there is no single institutional model for large-scale conservation: success depends on finding institutional strategies that secure long-term conservation outcomes and ensure that conservation gains are not reversed when funding runs out, private owners change priorities, or land changes hands.

  19. Large-scale doming on Europa: A model of formation of Thera Macula (United States)

    Mével, Loïc; Mercier, Eric


    Since the Galileo spacecraft revealed Europa's surface at high and medium resolutions, the deformation and processes affecting the relatively young surface have been defined more accurately. This work reports observations of a large-scale feature of the southern hemisphere, Thera Macula. Thera shares common traits with many other features, including small-scale domes, lenticulae and large-scale chaotic areas (disrupted ancient surfaces lying on a dark matrix), but remains singular in its asymmetric morphology. On the basis of these observations, we propose a scenario for the formation of Thera Macula. It involves large-scale doming (40-70 km in radius) of the pre-existing surface associated with ductile deformation, followed by collapse of the resulting megadome, with brittle disruption of blocks and flow of low-viscosity material over the surrounding ridged plains. The processes responsible for each stage of the proposed scenario have been investigated; both cryomagmatic and diapiric origins are discussed and tested against the observations. Finally, comparison of similar features at various scales suggests that Thera Macula, through its peculiarities (asymmetry, rounded bulge), may have preserved the intermediate stages of the formation of subcircular chaos at least up to about 50 km in radius. A common evolution and endogenic origin for multi-scale hot-spot features is proposed: (1) a doming stage, (2) a collapse and extrusion stage, and (3) a relaxation stage.

  20. Building Participation in Large-scale Conservation: Lessons from Belize and Panama

    Directory of Open Access Journals (Sweden)

    Jesse Guite Hastings


    Motivated by biogeography and a desire for alignment with the funding priorities of donors, the twenty-first century has seen big international NGOs shift towards a large-scale conservation approach. This shift has meant that even before stakeholders at the national and local scale are involved, conservation programmes often have their objectives defined and funding allocated. This paper uses the experiences of Conservation International's Marine Management Area Science (MMAS) programme in Belize and Panama to explore how to build participation at the national and local scale while working within the bounds of the current conservation paradigm. Qualitative data about MMAS were gathered through a multi-sited ethnographic research process, utilising document review, direct observation, and semi-structured interviews with 82 informants in Belize, Panama, and the United States of America. Results indicate that while a large-scale approach to conservation disadvantages early national and local stakeholder participation, this effect can be mediated by focusing engagement efforts, paying attention to context, building horizontal and vertical partnerships, and using deliberative processes that promote learning. While explicit consideration of geopolitics and local complexity alongside biogeography in the planning phase of a large-scale conservation programme is ideal, actions taken by programme managers during implementation can still have a substantial impact on conservation outcomes.

  1. Integration and segregation of large-scale brain networks during short-term task automatization. (United States)

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes


    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes.

  2. Large scale Brownian dynamics of confined suspensions of rigid particles. (United States)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A; Donev, Aleksandar


    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. 
Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose
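
    The random finite difference (RFD) idea for the stochastic drift term can be illustrated in one dimension, using a classical wall-corrected mobility as a stand-in for the RPY mobility matrix. This is a sketch under simplifying assumptions, not the authors' GPU implementation: for a smooth mobility M(h), averaging (M(h + δW/2) - M(h - δW/2))·W/δ over Gaussian W recovers kT·dM/dh without ever forming derivatives of M.

```python
import numpy as np

rng = np.random.default_rng(1)
kT, a, M0 = 1.0, 1.0, 1.0

def mobility(h):
    """Wall-corrected mobility of a sphere at height h (Lorentz-type correction;
    an illustrative scalar stand-in for the RPY mobility matrix)."""
    return M0 * (1.0 - 9.0 * a / (16.0 * h))

def rfd_drift(h, delta=1e-4, n_samples=20000):
    """Random finite difference estimate of the drift term kT * dM/dh."""
    W = rng.normal(size=n_samples)
    diff = mobility(h + delta * W / 2) - mobility(h - delta * W / 2)
    return kT * np.mean(diff * W) / delta

h = 2.0
exact = kT * 9.0 * a / (16.0 * h ** 2)   # analytic dM/dh for comparison
print(rfd_drift(h), exact)               # the two values agree closely
```

    In many dimensions the same recipe only requires applying the mobility operator at two displaced configurations per step, which is why it avoids the resistance solves and inverse-square-root products mentioned above.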

  3. Large scale Brownian dynamics of confined suspensions of rigid particles (United States)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar


    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. 
Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose

  4. The predictability of large-scale wind-driven flows

    Directory of Open Access Journals (Sweden)

    A. Mahadevan


    The singular values associated with optimally growing perturbations to stationary and time-dependent solutions for the general circulation in an ocean basin provide a measure of the rate at which solutions with nearby initial conditions begin to diverge, and hence, a measure of the predictability of the flow. In this paper, the singular vectors and singular values of stationary and evolving examples of wind-driven, double-gyre circulations in different flow regimes are explored. By changing the Reynolds number in simple quasi-geostrophic models of the wind-driven circulation, steady, weakly aperiodic and chaotic states may be examined. The singular vectors of the steady state reveal some of the physical mechanisms responsible for optimally growing perturbations. In time-dependent cases, the dominant singular values show significant variability in time, indicating strong variations in the predictability of the flow. When the underlying flow is weakly aperiodic, the dominant singular values co-vary with integral measures of the large-scale flow, such as the basin-integrated upper ocean kinetic energy and the transport in the western boundary current extension. Furthermore, in a reduced gravity quasi-geostrophic model of a weakly aperiodic, double-gyre flow, the behaviour of the dominant singular values may be used to predict a change in the large-scale flow, a feature not shared by an analogous two-layer model. When the circulation is in a strongly aperiodic state, the dominant singular values no longer vary coherently with integral measures of the flow. Instead, they fluctuate in a very aperiodic fashion on mesoscale time scales. The dominant singular vectors then depend strongly on the arrangement of mesoscale features in the flow and the evolved forms of the associated singular vectors have relatively short spatial scales. These results have several implications. In weakly aperiodic, periodic, and stationary regimes, the mesoscale energy
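
    The link between singular values and optimal perturbation growth can be illustrated with a toy non-normal propagator (the matrix below is invented for illustration and stands in for the tangent linear model of the double-gyre circulation).

```python
import numpy as np

# A non-normal 2x2 propagator: both eigenvalues have modulus < 1, so every
# normal mode decays, yet the leading singular value exceeds 1, so the
# optimally oriented perturbation grows transiently before decaying.
A = np.array([[0.8, 5.0],
              [0.0, 0.9]])

eigvals = np.linalg.eigvals(A)
sigma = np.linalg.svd(A, compute_uv=False)

print(np.max(np.abs(eigvals)), sigma[0])  # modal decay vs transient growth
```

    The leading singular value bounds the one-step amplification of any perturbation, which is why its time variability tracks the changing predictability of the flow.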

  5. The Genetic Etiology of Tourette Syndrome: Large-Scale Collaborative Efforts on the Precipice of Discovery (United States)

    Georgitsi, Marianthi; Willsey, A. Jeremy; Mathews, Carol A.; State, Matthew; Scharf, Jeremiah M.; Paschou, Peristera


    Gilles de la Tourette Syndrome (TS) is a childhood-onset neurodevelopmental disorder that is characterized by multiple motor and phonic tics. It has a complex etiology with multiple genes likely interacting with environmental factors to lead to the onset of symptoms. The genetic basis of the disorder remains elusive. However, multiple resources and large-scale projects are coming together, launching a new era in the field and bringing us to the verge of discovery. The large-scale efforts outlined in this report are complementary and represent a range of different approaches to the study of disorders with complex inheritance. The Tourette Syndrome Association International Consortium for Genetics (TSAICG) has focused on large families, parent-proband trios and cases for large case-control designs such as genomewide association studies (GWAS), copy number variation (CNV) scans, and exome/genome sequencing. TIC Genetics targets rare, large effect size mutations in simplex trios and multigenerational families. The European Multicentre Tics in Children Study (EMTICS) seeks to elucidate gene-environment interactions, including the involvement of infection and immune mechanisms in TS etiology. Finally, TS-EUROTRAIN, a Marie Curie Initial Training Network, aims to act as a platform to unify large-scale projects in the field and to educate the next generation of experts. Importantly, these complementary large-scale efforts are joining forces to uncover the full range of genetic variation and environmental risk factors for TS, holding great promise for identifying definitive TS susceptibility genes and shedding light on the complex pathophysiology of this disorder. PMID:27536211

  6. Visualizing large-scale uncertainty in astrophysical data. (United States)

    Li, Hongwei; Fu, Chi-Wing; Li, Yinggang; Hanson, Andrew


    Visualization of uncertainty or error in astrophysical data is seldom available in simulations of astronomical phenomena, and yet almost all rendered attributes possess some degree of uncertainty due to observational error. Uncertainties associated with spatial location typically vary significantly with scale and thus introduce further complexity in the interpretation of a given visualization. This paper introduces effective techniques for visualizing uncertainty in large-scale virtual astrophysical environments. Building upon our previous transparently scalable visualization architecture, we develop tools that enhance the perception and comprehension of uncertainty across wide scale ranges. Our methods include a unified color-coding scheme for representing log-scale distances and percentage errors, an ellipsoid model to represent positional uncertainty, an ellipsoid envelope model to expose trajectory uncertainty, and a magic-glass design supporting the selection of ranges of log-scale distance and uncertainty parameters, as well as an overview mode and a scalable WIM tool for exposing the magnitudes of spatial context and uncertainty.

  7. A Large-Scale 3D Object Recognition dataset

    DEFF Research Database (Denmark)

    Sølund, Thomas; Glent Buch, Anders; Krüger, Norbert


    This paper presents a new large scale dataset targeting evaluation of local shape descriptors and 3d object recognition algorithms. The dataset consists of point clouds and triangulated meshes from 292 physical scenes taken from 11 different views; a total of approximately 3204 views. Each...... of the physical scenes contain 10 occluded objects resulting in a dataset with 32040 unique object poses and 45 different object models. The 45 object models are full 360 degree models which are scanned with a high precision structured light scanner and a turntable. All the included objects belong to different...... geometric groups; concave, convex, cylindrical and flat 3D object models. The object models have varying amount of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching...

  8. How Large-Scale Research Facilities Connect to Global Research

    DEFF Research Database (Denmark)

    Lauto, Giancarlo; Valentin, Finn


    institutional settings. Policies mandating LSRFs should consider that research prioritized on the basis of technological relevance limits the international reach of collaborations. Additionally, the propensity for international collaboration is lower for resident scientists than for those affiliated......Policies for large-scale research facilities (LSRFs) often highlight their spillovers to industrial innovation and their contribution to the external connectivity of the regional innovation system hosting them. Arguably, the particular institutional features of LSRFs are conducive for collaborative...... research. However, based on data on publications produced in 2006–2009 at the Neutron Science Directorate of Oak Ridge National Laboratory in Tennessee (United States), we find that internationalization of its collaborative research is restrained by coordination costs similar to those characterizing other...

  9. Transition from large-scale to small-scale dynamo. (United States)

    Ponty, Y; Plunian, F


    The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the earlier Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher-order effects. Beyond the onset we find that this generalized α effect scales as O(Rm⁻¹), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo action occurs even if the large-scale field is artificially suppressed.
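    The generalized α effect invoked here extends the standard mean-field expansion of the turbulent electromotive force. As a reminder of that standard framework (textbook mean-field dynamo theory, not equations taken from this paper), the leading terms are:

```latex
\mathcal{E} \;\equiv\; \overline{\mathbf{u}' \times \mathbf{b}'}
  \;=\; \alpha\,\overline{\mathbf{B}} \;-\; \beta\,\nabla \times \overline{\mathbf{B}} \;+\; \dots,
\qquad
\frac{\partial \overline{\mathbf{B}}}{\partial t}
  \;=\; \nabla \times \left( \overline{\mathbf{U}} \times \overline{\mathbf{B}} + \mathcal{E} \right)
    \;+\; \eta\,\nabla^{2} \overline{\mathbf{B}},
```

where α generates large-scale field and β acts as turbulent diffusion; the abstract's "generalized α effect" bundles these together with all higher-order contributions.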

  10. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research; Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft]


    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
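    As a minimal illustration of the two-stage structure (a newsvendor-style recourse problem, not the decomposition/importance-sampling machinery the abstract describes), sampled scenarios can be assembled into one deterministic-equivalent LP; the scenario data and costs below are invented for the sketch:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 200                      # number of sampled demand scenarios
c, q = 1.0, 3.0              # first-stage unit cost, per-unit shortage penalty
demand = rng.uniform(50, 150, size=N)   # sampled uncertain demand

# Decision vector: [x, y_1, ..., y_N]; x = first-stage order, y_i = shortage in scenario i.
obj = np.concatenate([[c], np.full(N, q / N)])   # c*x + (1/N) * sum_i q*y_i

# Recourse constraints: y_i >= demand_i - x  <=>  -x - y_i <= -demand_i
A_ub = np.hstack([-np.ones((N, 1)), -np.eye(N)])
b_ub = -demand

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (N + 1))
x_opt = res.x[0]             # first-stage decision that hedges over all scenarios
```

Decomposition methods (Benders/L-shaped) exploit exactly this block structure instead of forming the full matrix, which is what makes problems with many scenarios tractable.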

  11. Recovery Act - Large Scale SWNT Purification and Solubilization

    Energy Technology Data Exchange (ETDEWEB)

    Michael Gemano; Dr. Linda B. McGown


    The goal of this Phase I project was to establish a quantitative foundation for development of binary G-gels for large-scale, commercial processing of SWNTs and to develop scientific insight into the underlying mechanisms of solubilization, selectivity and alignment. In order to accomplish this, we performed systematic studies to determine the effects of G-gel composition and experimental conditions that would enable us to achieve our goals, which include (1) preparation of ultra-high purity SWNTs from low-quality, commercial SWNT starting materials, (2) separation of MWNTs from SWNTs, (3) bulk, non-destructive solubilization of individual SWNTs in aqueous solution at high concentrations (10-100 mg/mL) without sonication or centrifugation, (4) tunable enrichment of subpopulations of the SWNTs based on metallic vs. semiconductor properties, diameter, or chirality and (5) alignment of individual SWNTs.

  12. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)


    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator, and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating greater than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/VAR control.

  13. Solving large scale traveling salesman problems by chaotic neurodynamics. (United States)

    Hasegawa, Mikio; Ikeguchi, Tohru; Aihara, Kazuyuki


    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n² for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Finally, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches.
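    The tabu mechanism that the authors emulate with neural refractoriness can be sketched in conventional form. The following minimal 2-opt tabu search on a tiny random instance is purely illustrative (instance size, tenure, and iteration counts are invented; it bears no resemblance to the 85,900-city scale of the paper):

```python
import itertools
import random

random.seed(1)
n = 25  # small illustrative instance
cities = [(random.random(), random.random()) for _ in range(n)]

def tour_length(tour):
    return sum(((cities[a][0] - cities[b][0]) ** 2 +
                (cities[a][1] - cities[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

tour = list(range(n))
best = tour_length(tour)
tabu = {}  # 2-opt move -> iteration until which it stays forbidden

for it in range(150):
    # Pick the best non-tabu 2-opt move, even if it worsens the tour;
    # the tabu list plays the role of the neurons' refractory effect.
    move, gain = None, float("-inf")
    cur = tour_length(tour)
    for i, j in itertools.combinations(range(n), 2):
        if tabu.get((i, j), -1) >= it:
            continue
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        g = cur - tour_length(cand)
        if g > gain:
            move, gain = (i, j), g
    if move is None:
        break
    i, j = move
    tour = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    tabu[move] = it + 15  # tabu tenure: forbid undoing this move for 15 iterations
    best = min(best, tour_length(tour))
```

Accepting the best admissible move even when it is worsening is what lets the search climb out of local optima, which is the behavior the chaotic dynamics reproduce.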

  14. Unfolding large-scale online collaborative human dynamics

    CERN Document Server

    Zha, Yilong; Zhou, Changsong


    Large-scale interacting human activities underlie all social and economic phenomena, but quantitative understanding of regular patterns and mechanisms is very challenging and still rare. Self-organized online collaborative activities with precise record of event timing provide unprecedented opportunity. Our empirical analysis of the history of millions of updates in Wikipedia shows a universal double power-law distribution of time intervals between consecutive updates of an article. We then propose a generic model to unfold collaborative human activities into three modules: (i) individual behavior characterized by Poissonian initiation of an action, (ii) human interaction captured by a cascading response to others with a power-law waiting time, and (iii) population growth due to increasing number of interacting individuals. This unfolding allows us to obtain an analytical formula that is fully supported by the universal patterns in empirical data. Our modeling approaches reveal "simplicity" beyond complex interac...
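    The cascading-response module rests on power-law-distributed waiting times. A standard way to draw such waiting times is inverse-transform sampling of a bounded Pareto distribution; the exponent and bounds below are illustrative choices, not parameters from the paper:

```python
import random

random.seed(2)

def powerlaw_wait(alpha=2.0, tmin=1.0, tmax=1e4):
    """Sample a waiting time with density proportional to t^(-alpha) on [tmin, tmax]."""
    u = random.random()
    # Invert the bounded-Pareto CDF: F(t) = (a - t^(1-alpha)) / (a - b)
    a, b = tmin ** (1 - alpha), tmax ** (1 - alpha)
    return (a - u * (a - b)) ** (1 / (1 - alpha))

waits = [powerlaw_wait() for _ in range(10000)]
```

Mixing Poisson-initiated events with such heavy-tailed response delays is the generic recipe that produces the double power-law inter-event distribution the abstract reports.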

  15. Exploiting large-scale correlations to detect continuous gravitational waves. (United States)

    Pletsch, Holger J; Allen, Bruce


    Fully coherent searches (over realistic ranges of parameter space and year-long observation times) for unknown sources of continuous gravitational waves are computationally prohibitive. Less expensive hierarchical searches divide the data into shorter segments which are analyzed coherently, then detection statistics from different segments are combined incoherently. The novel method presented here solves the long-standing problem of how best to do the incoherent combination. The optimal solution exploits large-scale parameter-space correlations in the coherent detection statistic. Application to simulated data shows dramatic sensitivity improvements compared with previously available (ad hoc) methods, increasing the spatial volume probed by more than 2 orders of magnitude at lower computational cost.

  16. Large-scale structure non-Gaussianities with modal methods (United States)

    Schmittfull, Marcel


    Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
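    For reference, the quantity being estimated is the three-point correlator of the density contrast in Fourier space (the standard definition, not specific to this paper):

```latex
\langle \delta(\mathbf{k}_1)\,\delta(\mathbf{k}_2)\,\delta(\mathbf{k}_3) \rangle
  \;=\; (2\pi)^3\,\delta_D(\mathbf{k}_1 + \mathbf{k}_2 + \mathbf{k}_3)\,
        B(k_1, k_2, k_3),
```

with the Dirac delta enforcing closed triangle configurations. The modal method expands B(k₁,k₂,k₃) on a separable basis, which is what reduces the measurement to the ~50 coefficients mentioned above.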

  17. Waste management concept for future large-scale nuclear power

    Energy Technology Data Exchange (ETDEWEB)

    Ganev, I.; Khacheresov, G.; Lopatkin, A.; Naumov, V.; Orlov, V.; Smirnov, V.; Tochenyi, L. [R and D Institute of Power Engineering (RDIPE), Moscow (Russian Federation)


    The concept of radwaste management is discussed as part of the general concept of a naturally safe nuclear technology for the large-scale power industry. Deterministic exclusion of reactor accidents with fuel failure allows burning MA within the main fuel, as well as I and Tc. On attaining a certain efficiency of radiochemical separation of actinides and FPs, it becomes possible to bury rad-wastes after their long-term cooling, without upsetting the natural radiation level. The paper presents the results of studies performed by RDIPE, covering the reactor proper, the radioactivity of fuel and wastes, the storage and disposal of rad-wastes, and the utilization of Sr and Cs. (authors)

  18. Statistics of Caustics in Large-Scale Structure Formation (United States)

    Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien


    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. We moreover, are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
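    In the 1D case the Zel'dovich caustic condition can be stated compactly (the standard form of the formalism, with Ψ the displacement potential and D₊ the linear growth factor):

```latex
x(q,\tau) \;=\; q \;-\; D_+(\tau)\,\partial_q \Psi(q),
\qquad
\frac{\partial x}{\partial q} \;=\; 1 - D_+(\tau)\,\partial_q^2 \Psi(q) \;=\; 0
\quad \text{(caustic / shell crossing)},
```

so caustics first form where the curvature of the displacement potential is largest; the number densities and correlations of these points in a Gaussian primordial field are what the analysis computes.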

  19. Large-Scale Quantitative Analysis of Painting Arts (United States)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong


    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.
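    A roughness exponent of the kind mentioned can be illustrated on a synthetic 1D brightness profile. The sketch below is an assumption about the style of analysis, not the authors' pipeline: it estimates the exponent from how the local interface width scales with window size, using a random walk whose known exponent is 0.5:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1D "brightness" profile: a random walk with roughness exponent 0.5
profile = np.cumsum(rng.standard_normal(4096))

sizes = [8, 16, 32, 64, 128]
widths = []
for l in sizes:
    segs = profile[: len(profile) // l * l].reshape(-1, l)
    segs = segs - segs.mean(axis=1, keepdims=True)   # detrend each window by its mean
    widths.append(np.sqrt((segs ** 2).mean()))       # rms width at this window size

# Roughness exponent = slope of log(width) vs log(window size)
slope = np.polyfit(np.log(sizes), np.log(widths), 1)[0]
```

For real paintings, `profile` would be a scan line (or 2D generalization) of the image brightness; a rising exponent over art-historical periods is the trend the abstract links to chiaroscuro and sfumato.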

  20. Large-scale prediction of drug-target relationships

    DEFF Research Database (Denmark)

    Kuhn, Michael; Campillos, Mónica; González, Paula


    The rapidly increasing amount of publicly available knowledge in biology and chemistry enables scientists to revisit many open problems by the systematic integration and analysis of heterogeneous novel data. The integration of relevant data does not only allow analyses at the network level......, but also provides a more global view on drug-target relations. Here we review recent attempts to apply large-scale computational analyses to predict novel interactions of drugs and targets from molecular and cellular features. In this context, we quantify the family-dependent probability of two proteins...... to bind the same ligand as a function of their sequence similarity. We finally discuss how phenotypic data could help to expand our understanding of the complex mechanisms of drug action....

  1. Large-scale Ising-machines composed of magnetic neurons (United States)

    Mizushima, Koichi; Goto, Hayato; Sato, Rie


    We propose Ising-machines composed of magnetic neurons, that is, magnetic bits in a recording track. In large-scale machines, the sizes of both neurons and synapses need to be reduced, and neat and smart connections among neurons are also required to achieve all-to-all connectivity among them. These requirements can be fulfilled by adopting magnetic recording technologies such as race-track memories and skyrmion tracks because the area of a magnetic bit is almost two orders of magnitude smaller than that of static random access memory, which has normally been used as a semiconductor neuron, and the smart connections among neurons are realized by using the read and write methods of these technologies.
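    The workload such hardware targets is energy minimization of an all-to-all Ising model. A software stand-in, simulated annealing on a tiny random instance, is purely illustrative of what the magnetic neurons and synapses would implement physically (all sizes and parameters here are invented):

```python
import math
import random

random.seed(0)
n = 12  # spins ("magnetic neurons"); all-to-all couplings ("synapses")
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.gauss(0.0, 1.0)

def energy(s):
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

s = [random.choice([-1, 1]) for _ in range(n)]
best_E = energy(s)
T = 2.0
for step in range(4000):
    i = random.randrange(n)
    # Energy change from flipping spin i (the diagonal J[i][i] is zero)
    dE = 2 * s[i] * sum(J[i][j] * s[j] for j in range(n))
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s[i] = -s[i]
        best_E = min(best_E, energy(s))
    T *= 0.999  # geometric cooling schedule
```

The hardware advantage claimed in the abstract comes from realizing the spins as nanoscale magnetic bits, so the same all-to-all update structure fits in far less area than SRAM-based neurons.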

  2. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie


    is required to understand the structures. In practice, however, also the applicability and costs of the methods are crucial. The SkyTEM method is very cost-effective in providing dense data sets, and it is therefore recommendable to use this method initially in mapping campaigns. For more detailed structural...... information, seismic data can profitably be acquired in certain areas of interest, preferably selected on the basis of the SkyTEM data. In areas where extremely detailed information about the near-surface is required, geoelectrical data (resistivity information) and ground penetrating radar data (structural......This thesis presents the main results of a four year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km2...

  3. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    , telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly......Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV...... complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties....

  4. A large-scale evaluation of computational protein function prediction. (United States)

    Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo


    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools.

  5. Synchronization control for large-scale network systems

    CERN Document Server

    Wu, Yuanqing; Su, Hongye; Shi, Peng; Wu, Zheng-Guang


    This book provides recent advances in the analysis and synthesis of large-scale network systems (LSNSs) with sampled-data communication and non-identical nodes. The first chapter presents an introduction to synchronization of LSNSs and algebraic graph theory, as well as an overview of recent developments in LSNSs with sampled-data control or output regulation control. The main text is organized into two parts - Part I: LSNSs with sampled-data communication, and Part II: LSNSs with non-identical nodes. The book describes the construction of adaptive reference generators in the first stage and robust regulators in the second stage. Examples are presented to show the effectiveness of the proposed design techniques.

  6. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram


    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate inter-cell interference. Particularly, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR based precoding design in which the regularization factor is used to allow better resilience with respect to the channel estimation errors. Based on tools from random matrix theory, we provide an analytical study of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided in order to validate our findings and to confirm the performance of the proposed precoding scheme.
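    SLNR-based precoding admits a simple closed form, which is what makes a regularization factor easy to insert. The numpy sketch below shows generic regularized SLNR beamforming for a single cell with an assumed regularization α; it is not the paper's coordinated multi-tier design, and all dimensions are toy values:

```python
import numpy as np

rng = np.random.default_rng(3)

M, K = 8, 4                  # BS antennas, single-antenna users
# Rayleigh-fading channel rows h_k (one per user)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
sigma2 = 0.1                 # noise variance
alpha = sigma2               # regularization factor (the paper tunes this under CSI errors)

W = np.zeros((M, K), dtype=complex)
for k in range(K):
    Hk = np.delete(H, k, axis=0)           # channels of the users beam k leaks into
    A = Hk.conj().T @ Hk + alpha * np.eye(M)
    wk = np.linalg.solve(A, H[k].conj())   # closed-form SLNR-maximizing direction
    W[:, k] = wk / np.linalg.norm(wk)      # unit-power beam

G = np.abs(H @ W) ** 2                     # G[j, k] = power of beam k seen at user j
```

Each beam concentrates power on its own user while suppressing leakage to the others; increasing α trades some leakage suppression for robustness to channel estimation errors.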

  7. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)


    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  8. Large scale in vivo recordings to study neuronal biophysics. (United States)

    Giocomo, Lisa M


    Over the last several years, technological advances have enabled researchers to more readily observe single-cell membrane biophysics in awake, behaving animals. Studies utilizing these technologies have provided important insights into the mechanisms generating functional neural codes in both sensory and non-sensory cortical circuits. Crucial for a deeper understanding of how membrane biophysics control circuit dynamics, however, is a continued effort to move toward large-scale studies of membrane biophysics, in terms of the numbers of neurons and ion channels examined. Future work faces a number of theoretical and technical challenges on this front, but recent technological developments hold great promise for a larger scale understanding of how membrane biophysics contribute to circuit coding and computation. Copyright © 2014 Elsevier Ltd. All rights reserved.


    Directory of Open Access Journals (Sweden)

    Yuusuke KAWAKITA


    Full Text Available Radio Frequency Identification (RFID devices and related technologies have received a great deal of attention for their ability to perform non-contact object identification. Systems incorporating RFID have been evaluated from a variety of perspectives. The authors constructed a networked RFID system to support event management at NetWorld+Interop 2004 Tokyo, an event that received 150,000 visitors. The system used multiple RFID readers installed at the venue and RFID tags carried by each visitor to provide a platform for running various management and visitor support applications. This paper presents the results of this field trial of RFID readability rates. It further addresses the applicability of RFID systems to visitor management, a problematic aspect of large-scale events.

  10. Measuring large-scale social networks with high resolution. (United States)

    Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr; Cuttone, Andrea; Madsen, Mette My; Larsen, Jakob Eg; Lehmann, Sune


    This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years-the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data-types measured, and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of multi-channel high-resolution approach to data collection.


  12. Harvesting Collective Trend Observations from Large Scale Study Trips

    DEFF Research Database (Denmark)

    Eriksen, Kaare; Ovesen, Nis


    trips for engineering students in architecture & design and the results from crowd-collecting a large amount of trend observations as well as the derived experience from using the method on a large scale study trip. The method has been developed and formalized in relation to study trips with large......To enhance industrial design students’ decoding and understanding of the technological possibilities and the diversity of needs and preferences in different cultures it is not unusual to arrange study trips where such students acquire a broader view to strengthen their professional skills...... numbers of students to the annual Milan Design Week and the Milan fair ‘I Saloni’ in Italy. The present paper describes and evaluates the method, the theory behind it, the practical execution of the trend registration, the results from the activities and future perspectives....

  13. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Marcello [Univ. of Toronto, ON (Canada); Baldauf, T. [Inst. of Advanced Studies, Princeton, NJ (United States); Bond, J. Richard [Univ. of Toronto, ON (Canada); Canadian Inst. for Advanced Research, Toronto, ON (Canada); Dalal, N. [Univ. of Illinois, Urbana-Champaign, IL (United States); Putter, R. D. [Jet Propulsion Lab., Pasadena, CA (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Dore, O. [Jet Propulsion Lab., Pasadena, CA (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Green, Daniel [Univ. of Toronto, ON (Canada); Canadian Inst. for Advanced Research, Toronto, ON (Canada); Hirata, Chris [The Ohio State Univ., Columbus, OH (United States); Huang, Zhiqi [Univ. of Toronto, ON (Canada); Huterer, Dragan [Univ. of Michigan, Ann Arbor, MI (United States); Jeong, Donghui [Pennsylvania State Univ., University Park, PA (United States); Johnson, Matthew C. [York Univ., Toronto, ON (Canada); Perimeter Inst., Waterloo, ON (Canada); Krause, Elisabeth [Stanford Univ., CA (United States); Loverde, Marilena [Univ. of Chicago, IL (United States); Meyers, Joel [Univ. of Toronto, ON (Canada); Meeburg, Daniel [Univ. of Toronto, ON (Canada); Senatore, Leonardo [Stanford Univ., CA (United States); Shandera, Sarah [Pennsylvania State Univ., University Park, PA (United States); Silverstein, Eva [Stanford Univ., CA (United States); Slosar, Anze [Brookhaven National Lab. (BNL), Upton, NY (United States); Smith, Kendrick [Perimeter Inst., Waterloo, Toronto, ON (Canada); Zaldarriaga, Matias [Univ. of Toronto, ON (Canada); Assassi, Valentin [Cambridge Univ. (United Kingdom); Braden, Jonathan [Univ. of Toronto, ON (Canada); Hajian, Amir [Univ. of Toronto, ON (Canada); Kobayashi, Takeshi [Perimeter Inst., Waterloo, Toronto, ON (Canada); Univ. of Toronto, ON (Canada); Stein, George [Univ. of Toronto, ON (Canada); Engelen, Alexander van [Univ. of Toronto, ON (Canada)


    The statistics of primordial curvature fluctuations are our window into the period of inflation, when these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Drastic improvements, however, should come from large-scale structure. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude f_NL^loc (f_NL^eq), natural target levels of sensitivity are Δf_NL^{loc,eq} ≃ 1. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014.

  14. Automatic Installation and Configuration for Large Scale Farms

    CERN Document Server

    Novák, J


    Since the early appearance of commodity hardware, the utilization of computers has risen rapidly, and they have become essential in all areas of life. It was soon realized that nodes are able to work cooperatively in order to solve new, more complex tasks. This concept materialized in coherent aggregations of computers called farms and clusters. Collective application of nodes, being efficient and economical, was adopted in education, research and industry before long. But maintenance, especially at large scale, emerged as a problem to be resolved. New challenges needed new methods and tools. Development work was started to build farm management applications and frameworks. In the first part of the thesis, these systems are introduced. After a general description of the matter, a comparative analysis of different approaches and tools illustrates the practical aspects of the theoretical discussion. CERN, the European Organization for Nuclear Research, is the largest particle physics laboratory in the world....

  15. Optimal Wind Energy Integration in Large-Scale Electric Grids (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion, to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operations: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate the expansion of transmission line capacity with methods that ensure optimal electric grid operation. The expansion of transmission line capacity must therefore enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add, "how much transmission line capacity" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve transmission congestion, create

  16. Large-Scale Quantitative Analysis of Painting Arts (United States)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong


    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase of the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances. PMID:25501877

  17. Nuclear-pumped lasers for large-scale applications (United States)

    Anderson, R. E.; Leonard, E. M.; Shea, R. E.; Berggren, R. R.

    Efficient initiation of large-volume chemical lasers may be achieved by neutron-induced reactions which produce charged particles in the final state. When a burst-mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron-absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to determine the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate.

  18. Optimization of large-scale fabrication of dielectric elastomer transducers

    DEFF Research Database (Denmark)

    Hassouneh, Suzan Sager

    Dielectric elastomers (DEs) have gained substantial ground in many different applications, such as wave energy harvesting, valves and loudspeakers. For DE technology to be commercially viable, it is necessary that any large-scale production operation is nondestructive, efficient and cheap. Danfoss ... as conductive adhesives were rejected. Dielectric properties below the percolation threshold were subsequently investigated, in order to conclude the study. In order to avoid destroying the network structure, carbon nanotubes (CNTs) were used as fillers during the preparation of the conductive elastomers ... grafted covalently to the CNT surface with poly(methacryloyl polydimethylsiloxane), resulting in conductivities comparable to commercially available Elastosil LR3162, even at low functionalisation. The optimized methods allow new processes for the production of DE film with corrugations...

  19. Large-scale indenter test program to measure subgouge displacements

    Energy Technology Data Exchange (ETDEWEB)

    Been, Ken; Lopez, Juan [Golder Associates Inc, Houston, TX (United States); Sancio, Rodolfo [MMI Engineering Inc., Houston, TX (United States)


    The production of submarine pipelines in an offshore environment covered with ice is very challenging. Several precautions must be taken, such as burying the pipelines to protect them from ice movement caused by gouging. The estimation of subgouge displacements is a key factor in pipeline design for ice-gouged environments. This paper investigates a method to measure subgouge displacements. An experimental program was implemented in an open field to produce large-scale idealized gouges on engineered soil beds (sand and clay). The horizontal force required to produce the gouge, the subgouge displacements in the soil and the strain imposed by these displacements on a buried model pipeline were monitored. The results showed that for a given keel, the gouge depth was inversely proportional to undrained shear strength in clay. The subgouge displacements measured did not show a relationship with gouge depth, width or soil density in the sand and clay tests.

  20. Risk Management Challenges in Large-scale Energy PSS

    DEFF Research Database (Denmark)

    Tegeltija, Miroslava; Oehmen, Josef; Kozin, Igor


    Probabilistic risk management approaches have a long tradition in engineering. A large variety of tools and techniques based on the probabilistic view of risk is available and applied in PSS practice. However, uncertainties that arise due to lack of knowledge and information still lack adequate representations. We focus on a large-scale energy company in Denmark as one case of current product/service-system risk management best practices. We analyze their risk management process and investigate the tools they use to support decision-making processes within the company. First, we identify the following challenges in the current risk management practices, in line with the literature: (1) current methods are not appropriate for situations dominated by weak knowledge and information; (2) the quality of traditional models in such situations is open to debate; (3) the quality of input...

  1. Magnetic Properties of Large-Scale Nanostructured Graphene Systems

    DEFF Research Database (Denmark)

    Gregersen, Søren Schou

    The on-going progress in two-dimensional (2D) materials and nanostructure fabrication motivates the study of altered and combined materials. Graphene, the most studied material of the 2D family, displays unique electronic and spintronic properties, including exceptionally high electron mobilities that surpass ... We study graphene systems using large-scale modeling techniques. Graphene perforations, or antidots, have received substantial interest in the prospect of opening large band gaps in the otherwise gapless graphene. Motivated by recent improvements of fabrication processes, such as forming graphene antidots and layer ... the graphene sublattice symmetry. We take advantage of this, and explore the fundamental features of zigzag-edged triangular graphene antidots (zz-TGAs). Their specific edge configurations give rise to highly desirable spin-filtering and spin-splitting transport features. The mechanisms behind...

  2. Entropy perturbations and large-scale magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Giovannini, Massimo [Centro 'Enrico Fermi', Compendio del Viminale, Via Panisperna 89/A, 00184 Rome (Italy); Department of Physics, Theory Division, CERN, 1211 Geneva 23 (Switzerland)


    An appropriate gauge-invariant framework for the treatment of magnetized curvature and entropy modes is developed. It is shown that large-scale magnetic fields, present after neutrino decoupling, affect curvature and entropy perturbations. The evolution of different magnetized modes is then studied across the matter-radiation transition both analytically and numerically. From the observation that after equality (but before decoupling) the (scalar) Sachs-Wolfe contribution must be (predominantly) adiabatic, constraints on the magnetic power spectra are deduced. The present results motivate the experimental analysis of more general initial conditions of cosmic microwave background (CMB) anisotropies (i.e. mixtures of magnetized adiabatic and isocurvature modes during the pre-decoupling phase). The role of the possible correlations between the different components of the fluctuations is partially discussed.

  3. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large-scale machine learning problems. Many medical imaging problems need a huge amount of training data to cover sufficient biological variability. Learning methods that scale badly with the number of training data points cannot be used in such scenarios, which may restrict the usage of many powerful classifiers with excellent generalization ability. We propose a cascaded classifier which ... The other major contribution concerns features learned from data rather than a predefined feature set: we explore the deep learning approach of convolutional neural networks (CNNs) for segmenting three-dimensional medical images, and propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of the 3D...
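The tri-planar idea summarized above can be sketched as follows. The patch size, axis conventions and the final fusion step are assumptions for illustration, not details taken from the thesis:

```python
import numpy as np

def triplanar_patches(volume, x, y, z, half=2):
    """Extract the three orthogonal 2D patches (xy, yz, zx) centred on a voxel.

    Each patch would feed one of three 2D CNNs whose outputs are fused to
    classify the centre voxel; patch size (2*half+1) is an assumption.
    """
    xy = volume[x - half : x + half + 1, y - half : y + half + 1, z]
    yz = volume[x, y - half : y + half + 1, z - half : z + half + 1]
    zx = volume[x - half : x + half + 1, y, z - half : z + half + 1]
    return xy, yz, zx
```

All three patches share the centre voxel, so a classifier fusing them sees the same point from three orthogonal views at 2D cost.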

  4. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen


    Numerous studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort toward carbon capture and storage. However, combustion under oxy-fuel conditions differs significantly from conventional air-fuel firing, and radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the non-gray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which ... the gray calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  5. Are Critical Phenomena Relevant to Large-Scale Evolution? (United States)

    Sole, Ricard V.; Bascompte, Jordi


    Recent theoretical studies, based on the theory of self-organized critical systems, seem to suggest that the dynamical patterns of macroevolution could belong to such a class of critical phenomena. Two basic approaches have been proposed: the Kauffman-Johnsen model (based on the use of coupled fitness landscapes) and the Bak-Sneppen model. Both are reviewed here. These models are oversimplified pictures of biological evolution, but their (possible) validity is based on the concept of universality, i.e. that apparently very different systems sharing a few common properties should also behave in a very similar way. In this paper we explore the current evidence from the fossil record, showing that some properties that are suggestive of critical dynamics could also be the result of random phenomena. Some general properties of the large-scale pattern of evolution, which should be reproduced by these models, are discussed.

  6. Retention of memory for large-scale spaces. (United States)

    Ishikawa, Toru


    This study empirically examined the retention of large-scale spatial memory, taking different types of spatial knowledge and levels of sense of direction into consideration. A total of 38 participants learned a route from a video and conducted spatial tasks immediately after learning the route and after 2 weeks or 3 months had passed. Results showed that spatial memory decayed over time, at a faster rate for the first 2-week period than for the subsequent period of up to 3 months, although it was not completely forgotten even after 3 months. The rate of forgetting differed depending on the type of knowledge, with landmark and route knowledge deteriorating at a much faster rate than survey knowledge. Sense of direction affected both the acquisition and the retention of survey knowledge. Survey knowledge by people with a good sense of direction was more accurate and decayed much less than that by people with a poor sense of direction.

  7. Tabulation Hashing for Large-Scale Data Processing

    DEFF Research Database (Denmark)

    Dahlgaard, Søren

    The past decade has brought with it an immense amount of data, from large volumes of text to network traffic data. Working with such large-scale data has become an increasingly important topic, giving rise to many important problems and influential solutions. One common denominator between many popular algorithms and data structures for tackling these problems is randomization implemented with hash functions. A common practice in the analysis of such randomized algorithms is to work under the abstract assumption that truly random unit-cost hash functions are freely available without concern ... In practice, hash functions are often employed as an “inner-loop” operation, and evaluation time is thus of utmost importance. This thesis seeks to bridge this gap in the theory by providing efficient families of hash functions with strong theoretical guarantees for several influential problems in the large-scale data regime. This is done...
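One such efficient family is simple tabulation hashing, which the thesis's title refers to. A minimal sketch, assuming 32-bit keys split into four bytes and 64-bit random lookup tables (both choices are illustrative):

```python
import random

random.seed(42)
# One lookup table of random 64-bit values per input byte position.
TABLES = [[random.getrandbits(64) for _ in range(256)] for _ in range(4)]

def simple_tabulation(key: int) -> int:
    """Hash a 32-bit key by XOR-ing together one table entry per byte.

    Evaluation is a handful of table lookups and XORs, which is why
    tabulation schemes suit "inner-loop" use while still giving strong
    theoretical guarantees (3-independence, and much more in applications).
    """
    h = 0
    for i in range(4):
        byte = (key >> (8 * i)) & 0xFF
        h ^= TABLES[i][byte]
    return h
```

The tables are filled once at setup; each evaluation then costs only four lookups and four XORs.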

  8. Experimental Investigation of Large-Scale Bubbly Plumes

    Energy Technology Data Exchange (ETDEWEB)

    Zboray, R.; Simiano, M.; De Cachard, F


    Carefully planned and instrumented experiments under well-defined boundary conditions have been carried out on large-scale, isothermal, bubbly plumes. The data obtained is meant to validate newly developed, high-resolution numerical tools for 3D transient, two-phase flow modelling. Several measurement techniques have been utilised to collect data from the experiments: particle image velocimetry, optical probes, electromagnetic probes, and visualisation. Bubble and liquid velocity fields, void-fraction distributions, bubble size and interfacial-area-concentration distributions have all been measured in the plume region, as well as recirculation velocities in the surrounding pool. The results obtained from the different measurement techniques have been compared. In general, the two-phase flow data obtained from the different techniques are found to be consistent, and of high enough quality for validating numerical simulation tools for 3D bubbly flows. (author)

  9. JSTOR: Large Scale Digitization of Journals in the United States


    Kevin M. Guthrie


    The JSTOR database now includes well over 2 million pages from 61 important journals in 13 academic disciplines. Additional journal content is being digitized at a rate of more than 100,000 pages per month. More than 320 libraries in the United States and Canada have become participating institutions, providing support for the creation, maintenance and growth of this database. Outside of North America, we have established a mirror site in the United Kingdom. Through a novel collaborative rela...

  10. A Large Scale, High Resolution Agent-Based Insurgency Model (United States)


    evolutionary computation, such as genetic algorithms (Mitchell, 1998) may be useful. Data could also be integrated by constraining agent attributes based...civil violence: an evolutionary multi- agent, game theoretic approach. Proc. IEEE Congress on Evolutionary Computation. Vancouver, BC, Canada, July 16-21...pastoral nomad/sedentary peasant interaction. Mathematical Anthropology and Cultural Theory: An International Journal 1(4), 2005. LeGrande, S

  11. Learning from large-scale quality improvement through comparisons. (United States)

    Ovretveit, John; Klazinga, Niek


    To discover lessons from 10 national health and social care quality programmes in the Netherlands. A mixed-methods comparison using a 'quantitative summarization of evidence for systematic comparison'. Each research team assessed whether there was evidence from their evaluation to support or refute 17 hypotheses about successful implementation of quality programmes. The programme managers carried out a similar assessment. Their assessments were represented as scores, which made it possible to carry out a cross-case analysis of factors affecting the success of large-scale quality programmes. The researchers who evaluated each of the programmes and the leaders who organized each programme. Health and social care service organizations and the national organizations which led the quality improvement programmes. This study did not make an intervention but compared experiences and evaluations of interventions carried out by national organizations with health and social care service organizations to help those organizations improve their services. The success of the national programmes, and the learning achieved by the programme organizations and care service delivery organizations. The method provided a way to summarize and compare complex information. Common factors which appeared to influence success in implementation included understanding of political processes and leaders' influencing skills, as well as technical skills to manage projects and apply improvement and change methods. Others could use a similar method to make a fast, broad-level, but systematic comparison across reports of improvements or programmes. Descriptions, and then comparisons, of the programmes reveal common factors which appeared to influence success in implementation. There were groups of factors which appeared to be more important for the success of certain types of programmes. It is possible that these factors may also be important for the success of large-scale improvement programmes in

  12. Large-scale mass distribution in the Illustris simulation (United States)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.


    Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 10^8 M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes, therefore it is likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.
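The density-based categorization into haloes, filaments and voids can be illustrated with a toy sketch; the mock density distribution and the threshold values below are invented for illustration and are not the Illustris criteria:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy dark-matter density contrasts (rho / mean rho); lognormal is a crude
# stand-in for the simulated density field.
density = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)
halo_cut, filament_cut = 57.0, 0.1  # illustrative boundaries only

# Label each cell by which density band it falls into, then measure the
# fraction of cells (a proxy for mass fractions) in each component.
labels = np.where(density >= halo_cut, "halo",
         np.where(density >= filament_cut, "filament", "void"))
fractions = {c: float(np.mean(labels == c)) for c in ("halo", "filament", "void")}
```

The real analysis weights by particle mass and tracks baryons and dark matter separately, but the bookkeeping has this shape: every element is assigned to exactly one component, so the fractions sum to one.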

  13. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulations (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES and fail to scale (e.g., OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  14. An effective method of large scale ontology matching. (United States)

    Diallo, Gayo


    We are currently facing a proliferation of heterogeneous biomedical data sources accessible through various knowledge-based applications. These data are annotated by increasingly extensive and widely disseminated knowledge organisation systems, ranging from simple terminologies and structured vocabularies to formal ontologies. In order to solve the interoperability issue, which arises due to the heterogeneity of these ontologies, an alignment task is usually performed. However, while significant effort has been made to provide tools that automatically align small ontologies containing hundreds or thousands of entities, little attention has been paid to the matching of large ontologies in the life sciences domain. We have designed and implemented ServOMap, an effective method for large-scale ontology matching. It is a fast and efficient high-precision system able to perform matching of input ontologies containing hundreds of thousands of entities. The system, which was included in the 2012 and 2013 editions of the Ontology Alignment Evaluation Initiative campaign, performed very well and was ranked among the top systems for large ontology matching. We proposed an approach for large-scale ontology matching relying on Information Retrieval (IR) techniques and the combination of lexical and machine-learning contextual similarity computing for the generation of candidate mappings. It is particularly adapted to the life sciences domain, as many of the ontologies in this domain benefit from synonym terms taken from the Unified Medical Language System that can be used by our IR strategy. The ServOMap system we implemented is able to deal with hundreds of thousands of entities with an efficient computation time.
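An IR-style lexical first pass of the kind such systems build on can be sketched as below; the Jaccard token measure, the threshold, and the dictionary-of-labels representation are illustrative assumptions, not the actual ServOMap pipeline:

```python
def lexical_candidates(ontology_a, ontology_b, threshold=0.5):
    """Propose candidate mappings between two ontologies by comparing the
    token sets of entity labels.

    Inputs are {entity_id: label} dicts (an assumption for this sketch).
    Returns (id_a, id_b, similarity) triples whose Jaccard token overlap
    meets the threshold; a real system would index labels and synonyms
    for scalability instead of this quadratic loop.
    """
    candidates = []
    for id_a, label_a in ontology_a.items():
        tokens_a = set(label_a.lower().split())
        for id_b, label_b in ontology_b.items():
            tokens_b = set(label_b.lower().split())
            jaccard = len(tokens_a & tokens_b) / len(tokens_a | tokens_b)
            if jaccard >= threshold:
                candidates.append((id_a, id_b, jaccard))
    return candidates
```

For example, matching {"A1": "heart disease"} against {"B1": "disease of heart", "B2": "lung cancer"} yields only the A1-B1 pair, since two of its three distinct tokens overlap.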

  15. Probing Inflation Using Galaxy Clustering On Ultra-Large Scales (United States)

    Dalal, Roohi; de Putter, Roland; Dore, Olivier


    A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-Gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ −(5/12)(n_s − 1), in multi-field theories f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) power spectrum estimator, and an Optimal Quadratic Estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.
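The single-field prediction quoted above is the consistency relation written in terms of the scalar spectral index n_s; plugging in a Planck-era value (n_s ≈ 0.965, an assumed figure for illustration) shows why σ(f_NL^loc) = 1 cleanly separates the two classes:

```latex
f_{\mathrm{NL}}^{\mathrm{loc}} \simeq -\frac{5}{12}\,(n_s - 1)
\qquad\Rightarrow\qquad
f_{\mathrm{NL}}^{\mathrm{loc}} \simeq -\frac{5}{12}\,(0.965 - 1) \approx 0.015
```

Single-field models thus predict a value indistinguishable from zero at unit sensitivity, whereas multi-field models can produce f_NL^loc of order one or larger.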

  16. Ecohydrological modeling for large-scale environmental impact assessment. (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen


    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Minimization of Linear Functionals Defined on Solutions of Large-Scale Discrete Ill-Posed Problems

    DEFF Research Database (Denmark)

    Elden, Lars; Hansen, Per Christian; Rojas, Marielba


    The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving...... the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...
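The parametric-programming reformulation can be sketched on a small, well-conditioned toy problem: minimize c^T x subject to ||Ax - b|| <= eps by following the path x(t) = argmin_x [c^T x + ||Ax - b||^2/(2t)] and bisecting on t until the residual reaches eps. This is only a schematic of the idea, with none of MLFIP's trust-region or limited-memory machinery:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
b = A @ x_true                   # consistent data, so the residual can reach 0
c = rng.normal(size=5)           # linear functional to minimize
eps = 1.0                        # allowed residual norm

AtA, Atb = A.T @ A, A.T @ b

def x_of_t(t):
    # minimizer of c^T x + ||A x - b||^2 / (2 t):  (A^T A) x = A^T b - t c
    return np.linalg.solve(AtA, Atb - t * c)

def resid(t):
    return float(np.linalg.norm(A @ x_of_t(t) - b))

# the residual grows monotonically along the path; bisect on t until it hits eps
lo, hi = 0.0, 1.0
while resid(hi) < eps:
    hi *= 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if resid(mid) < eps else (lo, mid)

x_star = x_of_t(lo)
print("c^T x:", float(c @ x_star), " residual:", resid(lo))
```

At t = 0 the path starts at the least-squares solution; increasing t trades residual for a smaller functional value, which is exactly the confidence-interval computation the abstract mentions.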

  18. Output Regulation of Large-Scale Hydraulic Networks with Minimal Steady State Power Consumption

    DEFF Research Database (Denmark)

    Jensen, Tom Nørgaard; Wisniewski, Rafal; De Persis, Claudio


    An industrial case study involving a large-scale hydraulic network is examined. The hydraulic network underlies a district heating system, with an arbitrary number of end-users. The problem of output regulation is addressed along with an optimization criterion for the control. The fact...... that the system is overactuated is exploited for minimizing the steady state electrical power consumption of the pumps in the system, while output regulation is maintained. The proposed control actions are decentralized in order to make changes in the structure of the hydraulic network easy to implement....
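The core exploitation of overactuation can be illustrated with a static toy: among all pump inputs u that achieve the output references (B u = r), the pseudoinverse picks the minimum-norm one, a crude proxy for minimal steady-state power. The gain matrix below is hypothetical, not taken from the case study:

```python
import numpy as np

# hypothetical static gain matrix: 2 regulated end-user outputs, 4 pumps
B = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 0.3, 1.0, 0.4]])
r = np.array([2.0, 1.5])                 # output references

u_star = np.linalg.pinv(B) @ r           # minimum-norm input with B u = r
print("outputs:", B @ u_star)            # regulation achieved
print("power proxy:", float(u_star @ u_star))

# any other exact solution adds a null-space component and costs more
_, _, Vt = np.linalg.svd(B)
n0 = Vt[-1]                              # direction with B @ n0 = 0
u_alt = u_star + 0.8 * n0
print("alt power proxy:", float(u_alt @ u_alt))
```

The null-space direction changes nothing at the regulated outputs but strictly increases the norm of u, which is why the extra degrees of freedom can be spent on minimizing consumption.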

  19. Assessing Programming Costs of Explicit Memory Localization on a Large Scale Shared Memory Multiprocessor

    Directory of Open Access Journals (Sweden)

    Silvio Picano


    Full Text Available We present detailed experimental work involving a commercially available large scale shared memory multiple instruction stream-multiple data stream (MIMD) parallel computer having a software controlled cache coherence mechanism. To make effective use of such an architecture, the programmer is responsible for designing the program's structure to match the underlying multiprocessor's capabilities. We describe the techniques used to exploit our multiprocessor (the BBN TC2000) on a network simulation program, showing the resulting performance gains and the associated programming costs. We show that an efficient implementation relies heavily on the user's ability to explicitly manage the memory system.

  20. Large scale stochastic spatio-temporal modelling with PCRaster (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.


    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.
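The Monte Carlo pattern described here, i.e. many stochastic realizations of a spatio-temporal raster model with ensemble statistics computed per cell, can be sketched in plain NumPy (this shows the generic pattern only, not the PCRaster Python API):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_steps, grid = 100, 50, (20, 20)
k = 0.1                                     # linear drainage coefficient

# toy stochastic raster model: a water store forced by random rainfall fields
final = np.empty((n_samples,) + grid)
for s in range(n_samples):
    store = np.zeros(grid)
    for t in range(n_steps):
        rain = rng.gamma(2.0, 1.0, size=grid)   # mean rainfall = 2 per step
        store = store + rain - k * store
    final[s] = store

mean_map = final.mean(axis=0)               # ensemble mean per cell
p90_map = np.percentile(final, 90, axis=0)  # per-cell 90th percentile
print(mean_map.mean())                      # approaches mean_rain / k = 20
```

The outer sample loop is embarrassingly parallel, which is precisely what makes scaling such models to hundreds of machines attractive.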

  1. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.


    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations on the thermal, economic and environmental performances are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed designing and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expectable variety of the thermal performance of such plants, a method is proposed where data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with a simulation tool, design studies are carried out, ranging from parameter analysis, through energy planning for a new settlement, to a proposal for the combination of plane solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the demand for developing computer models for the more advanced solar collector designs and especially for the control operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to find the relevance of the application.
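The thermal core of such a simulation tool is a quasi-steady collector model. A minimal sketch using the standard efficiency-curve form Q = A(eta0*G - a1*dT - a2*dT^2); the coefficients are illustrative, not fitted to the Marstal plant:

```python
def collector_power(G, T_m, T_a, area, eta0=0.8, a1=3.5, a2=0.015):
    """Useful collector power [W]: Q = A * (eta0*G - a1*dT - a2*dT^2),
    clipped at zero. Illustrative coefficients (-, W/m2K, W/m2K2)."""
    dT = T_m - T_a
    q = eta0 * G - a1 * dT - a2 * dT ** 2
    return area * max(q, 0.0)

# a 1000 m2 field at G = 800 W/m2, mean fluid 60 C, ambient 20 C
print(round(collector_power(800.0, 60.0, 20.0, 1000.0) / 1e3, 1), "kW")
```

Driving this hourly with a meteorological reference year, and separately with poor- and strong-irradiation years, reproduces the mean-versus-variability methodology described above.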

  2. Using interpreted large scale aerial photo data to enhance satellite-based mapping and explore forest land definitions (United States)

    Tracey S. Frescino; Gretchen G. Moisen


    The Interior-West, Forest Inventory and Analysis (FIA), Nevada Photo-Based Inventory Pilot (NPIP), launched in 2004, involved acquisition, processing, and interpretation of large scale aerial photographs on a subset of FIA plots (both forest and nonforest) throughout the state of Nevada. Two objectives of the pilot were to use the interpreted photo data to enhance...

  3. Energy from the desert very large scale PV power : state of the art and into the future

    CERN Document Server

    Komoto, Keiichi; Cunow, Edwin; Megherbi, Karim; Faiman, David; van der Vleuten, Peter


    The fourth volume in the established Energy from the Desert series examines and evaluates the potential and feasibility of Very Large Scale Photovoltaic Power Generation (VLS-PV) systems, which have capacities ranging from several megawatts to gigawatts, and develops practical project proposals toward implementing VLS-PV systems in the future. It comprehensively analyses all major issues involved in such large scale applications, based on the latest scientific and technological developments and by means of close international co-operation with experts from different countries. From t

  4. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Clyne, John [National Center for Atmospheric Research (NCAR)]


    Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
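The block-by-block saliency idea can be sketched with any transform coder. Below, Fourier thresholding stands in for the paper's wavelet model: "salient" blocks keep most of their coefficients, "context" blocks keep few, and the reconstruction error concentrates where fidelity was deliberately sacrificed:

```python
import numpy as np

def compress_block(block, keep):
    """Zero all but the largest-magnitude `keep` fraction of the block's
    Fourier coefficients, then reconstruct (stand-in for wavelet coding)."""
    F = np.fft.rfft2(block)
    thresh = np.quantile(np.abs(F), 1.0 - keep)
    F[np.abs(F) < thresh] = 0.0
    return np.fft.irfft2(F, s=block.shape)

rng = np.random.default_rng(0)
field = rng.normal(size=(64, 64)).cumsum(0).cumsum(1)   # smooth-ish 2-D field

out = np.empty_like(field)
for i in range(0, 64, 16):
    for j in range(0, 64, 16):
        keep = 0.9 if j < 32 else 0.1      # left half plays the "wake" region
        out[i:i+16, j:j+16] = compress_block(field[i:i+16, j:j+16], keep)

err_salient = float(np.abs(out[:, :32] - field[:, :32]).mean())
err_context = float(np.abs(out[:, 32:] - field[:, 32:]).mean())
print(err_salient, err_context)            # error concentrates in context blocks
```

The per-block `keep` value is the knob the user-driven model exposes: high in turbine-wake regions, low in the upper atmosphere where only coarse structure must survive.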

  5. Large-scale parallel genome assembler over cloud computing environment. (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong


    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
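The serial kernel of any de Bruijn graph assembler, building (k-1)-mer nodes from reads and walking unbranched paths into contigs, is compact; GiGA's contribution is distributing this over Hadoop/Giraph, which the sketch below does not attempt:

```python
from collections import defaultdict

def debruijn_contigs(reads, k=4):
    """Build a de Bruijn graph on (k-1)-mer nodes and walk unbranched paths."""
    edges = defaultdict(list)
    indeg = defaultdict(int)
    for r in reads:
        for i in range(len(r) - k + 1):
            kmer = r[i:i + k]
            u, v = kmer[:-1], kmer[1:]
            if v not in edges[u]:          # ignore duplicate edges for brevity
                edges[u].append(v)
                indeg[v] += 1
    contigs = []
    for u in list(edges):
        # start walks only at branch points (in- or out-degree != 1)
        if indeg[u] == 1 and len(edges[u]) == 1:
            continue
        for v in edges[u]:
            path = u + v[-1]
            while indeg[v] == 1 and len(edges[v]) == 1:
                v = edges[v][0]
                path += v[-1]
            contigs.append(path)
    return contigs

print(debruijn_contigs(["ACGTACGGA", "GTACGGATT"], k=4))
```

Overlapping reads collapse onto shared nodes, so contigs emerge as maximal non-branching paths; the Giraph version partitions these nodes across workers and walks paths by message passing.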

  6. The large-scale properties of simulated cosmological magnetic fields (United States)

    Marinacci, Federico; Vogelsberger, Mark; Mocz, Philip; Pakmor, Rüdiger


    We perform uniformly sampled large-scale cosmological simulations including magnetic fields with the moving mesh code AREPO. We run two sets of MHD simulations: one including adiabatic gas physics only; the other featuring the fiducial feedback model of the Illustris simulation. In the adiabatic case, the magnetic field amplification follows the B ∝ ρ^(2/3) scaling derived from 'flux-freezing' arguments, with the seed field strength providing an overall normalization factor. At high baryon overdensities the amplification is enhanced by shear flows and turbulence. Feedback physics and the inclusion of radiative cooling change this picture dramatically. In haloes, gas collapses to much larger densities and the magnetic field is amplified strongly and to the same maximum intensity irrespective of the initial seed field, of which any memory is lost. At lower densities a dependence on the seed field strength and orientation, which in principle can be used to constrain models of cosmic magnetogenesis, is still present. Inside the most massive haloes magnetic fields reach values of ~10-100 μG, in agreement with galaxy cluster observations. The topology of the field is tangled and gives rise to rotation measure signals in reasonable agreement with the observations. However, the rotation measure signal declines too rapidly towards larger radii as compared to observational data.
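The adiabatic scaling quoted above is a one-liner worth making explicit:

```python
def flux_frozen_B(B_seed, overdensity):
    """Adiabatic flux-freezing amplification: B = B_seed * (rho/rho_mean)^(2/3),
    valid for isotropic compression with the field frozen into the gas."""
    return B_seed * overdensity ** (2.0 / 3.0)

# a 1e-14 G seed compressed by a factor-1000 overdensity gains a factor ~100
print(flux_frozen_B(1e-14, 1e3))
```

This is the regime where the seed strength survives as an overall normalization; once cooling and feedback drive turbulent amplification in haloes, the output field saturates and this memory is erased.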

  7. Developing Large Scale Explosively Driven Flyer Experiments on Sand (United States)

    Rehagen, Thomas; Kraus, Richard


    Measurements of the dynamic behavior of granular materials are of great importance to a variety of scientific and engineering applications, including planetary science, seismology, and construction and destruction. In addition, high quality data are needed to enhance our understanding of granular physics and improve the computational models used to simulate related physical processes. However, since there is a non-negligible grain size associated with these materials, experiments must be of a relatively large scale in order to capture the continuum response of the material and reduce errors associated with the finite grain size. We will present designs for explosively driven flyer experiments to make high accuracy measurements of the Hugoniot of sand (with a grain size of hundreds of microns). To achieve an accuracy of better than a few percent in density, we are developing a platform to measure the Hugoniot of samples several centimeters in thickness. We will present the target designs as well as coupled designs for the explosively launched flyer system. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
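Extracting a Hugoniot state from such an experiment rests on the Rankine-Hugoniot jump conditions relating shock velocity Us and particle velocity up to density and pressure. A sketch with illustrative sand-like numbers (not results from this work):

```python
def hugoniot_state(rho0, Us, up, P0=0.0):
    """Rankine-Hugoniot jump conditions across a steady shock:
    mass:      rho = rho0 * Us / (Us - up)
    momentum:  P   = P0 + rho0 * Us * up
    """
    rho = rho0 * Us / (Us - up)
    P = P0 + rho0 * Us * up
    return rho, P

# illustrative state: rho0 = 1600 kg/m3, Us = 2500 m/s, up = 1000 m/s
rho, P = hugoniot_state(1600.0, 2500.0, 1000.0)
print(rho, "kg/m3,", P / 1e9, "GPa")
```

Because density enters through the ratio Us/(Us - up), small velocity errors are amplified, which is one reason the abstract argues for thick, large-diameter samples to reach few-percent density accuracy.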

  8. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui


    We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect the visual plausibility. Visual plausibility of scenarios simulated with these hypotheses assumed true are measured using subjective rating from viewers. As expected, analysis of results supports the truthfulness of the hypotheses under certain simulation environments. However, our analysis discovered four factors which may affect the authenticity of these hypotheses: number of collisions simulated simultaneously, homogeneity of colliding object pairs, distance from scene under simulation to camera position, and simulation method used. We also try to find an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.

  9. Nanomaterials processing toward large-scale flexible/stretchable electronics (United States)

    Takahashi, Toshitake

    In recent years, there has been tremendous progress in large-scale mechanically flexible electronics, where electrical components are fabricated on non-crystalline substrates such as plastics and glass. These devices are currently serving as the basis for various applications such as flat-panel displays, smart cards, and wearable electronics. In this thesis, a promising approach using chemically synthesized nanomaterials is explored to overcome various obstacles current technology faces in this field. Here, we use chemically synthesized semiconducting nanowires (NWs) including group IV (Si, Ge), III-V (InAs) and II-VI (CdS, CdSe) NWs, and semiconductor-enriched SWNTs (99% purity), and developed reliable, controllable, and more importantly uniform assembly methods on 4-inch wafer-scale flexible substrates in the form of either parallel NW arrays or SWNT random networks, which act as the active components in thin film transistors (TFTs). The TFTs thus obtained, composed of nanomaterials, show respectable electrical and optical properties such as 1) cut-off frequency, ft ~ 1 GHz and maximum frequency of oscillation, fmax ~ 1.8 GHz from InAs parallel NW array TFTs with channel length of ~ 1.5 μm, 2) photodetectors covering visible wavelengths (500-700 nm) using compositionally graded CdSxSe1-x (0 x-ray imaging device is also achieved by combining organic photodiodes with this backplane technology.

  10. Static micromixers based on large-scale industrial mixer geometry. (United States)

    Bertsch, A; Heimgartner, S; Cousseau, P; Renaud, P


    Mixing liquids at the micro-scale is difficult because the low Reynolds numbers in microchannels and in microreactors prohibit the use of conventional mixing techniques based on mechanical actuators and induced turbulence. Static mixers can be used to solve this mixing problem. This paper presents micromixers with geometries very close to conventional large-scale static mixers used in the chemical and food-processing industry. Two kinds of geometries have been studied. The first type is composed of a series of stationary rigid elements that form intersecting channels to split, rearrange and combine component streams. The second type is composed of a series of short helix elements arranged in pairs, each pair comprising a right-handed and a left-handed element arranged alternately in a pipe. Micromixers of both types have been designed by CAD and manufactured with the integral microstereolithography process, a new microfabrication technique that allows the manufacturing of complex three-dimensional objects in polymers. The realized mixers have been tested experimentally. Numerical simulations of these micromixers using the computational fluid dynamics (CFD) program FLUENT are used to evaluate the mixing efficiency. With a low pressure drop and good mixing efficiency, these truly three-dimensional micromixers can be used for mixing of reactants or liquids containing cells in many microTAS applications.
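The claim about low Reynolds numbers is easy to quantify: for typical micromixer dimensions and velocities, Re sits orders of magnitude below the pipe-flow transition range (~2300), so turbulent mixing is simply unavailable. Illustrative values:

```python
def reynolds(rho, v, d_h, mu):
    """Channel Reynolds number: Re = rho * v * d_h / mu."""
    return rho * v * d_h / mu

# water (rho ~ 998 kg/m3, mu ~ 1e-3 Pa s) in a 200-micron channel at 10 mm/s
Re = reynolds(998.0, 0.01, 200e-6, 1.0e-3)
print(Re)   # ~2: deep in the laminar regime
```

At Re ~ 2 the only transport mechanism left is diffusion, which is why the static-mixer elements above work by repeatedly splitting and recombining streams to shrink the diffusion distance.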

  11. Countercurrent tangential chromatography for large-scale protein purification. (United States)

    Shinkazh, Oleg; Kanani, Dharmesh; Barth, Morgan; Long, Matthew; Hussain, Daniar; Zydney, Andrew L


    Recent advances in cell culture technology have created significant pressure on the downstream purification process, leading to a "downstream bottleneck" in the production of recombinant therapeutic proteins for the treatment of cancer, genetic disorders, and cardiovascular disease. Countercurrent tangential chromatography overcomes many of the limitations of conventional column chromatography by having the resin (in the form of a slurry) flow through a series of static mixers and hollow fiber membrane modules. The buffers used in the binding, washing, and elution steps flow countercurrent to the resin, enabling high-resolution separations while reducing the amount of buffer needed for protein purification. The results obtained in this study provide the first experimental demonstration of the feasibility of using countercurrent tangential chromatography for the separation of a model protein mixture containing bovine serum albumin and myoglobin using a commercially available anion exchange resin. Batch uptake/desorption experiments were used in combination with critical flux data for the hollow fiber filters to design the countercurrent tangential chromatography system. A two-stage batch separation yielded the purified target protein at >99% purity with 94% recovery. The results clearly demonstrate the potential of using countercurrent tangential chromatography for the large-scale purification of therapeutic proteins. Copyright © 2010 Wiley Periodicals, Inc.
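The effect of staging on purity and recovery can be sketched with an idealized per-stage mass balance (the binding fractions below are hypothetical, not fitted to the BSA/myoglobin system):

```python
def two_stage_separation(m_target=1.0, m_impurity=1.0, bind_t=0.97, bind_i=0.04):
    """Idealized staged binding separation: each stage carries forward a
    fraction bind_t of target and bind_i of impurity (hypothetical values)."""
    for _ in range(2):
        m_target *= bind_t
        m_impurity *= bind_i
    purity = m_target / (m_target + m_impurity)
    return purity, m_target      # purity and recovered target fraction

purity, recovery = two_stage_separation()
print(round(purity, 4), round(recovery, 4))
```

With these illustrative fractions, two stages push purity above 99% while keeping recovery near 94%, the same qualitative trade-off the study reports: each added stage squares the impurity carry-over but only modestly erodes target recovery.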

  12. Open TG-GATEs: a large-scale toxicogenomics database (United States)

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi


    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from PMID:25313160

  13. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.]


    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  14. Stouffer's test in large-scale simultaneous hypothesis testing.

    Directory of Open Access Journals (Sweden)

    Sang Cheol Kim

    In microarray data analysis, we are often required to combine several dependent partial test results. To this end, many suggestions have been made in the previous literature; Tippett's test and Fisher's omnibus test are the most popular. Both tests have known null distributions when the partial tests are independent. However, for dependent tests, their null distributions (even asymptotic ones) are unknown and additional numerical procedures are required. In this paper, we revisited Stouffer's test, based on z-scores, and showed its advantage over the two aforementioned methods in the analysis of large-scale microarray data. The combined statistic in Stouffer's test has a normal distribution with mean 0, from the normality of the z-scores. Its variance can be estimated from the scores of genes in the experiment without an additional numerical procedure. We numerically compared the errors of Stouffer's test and the two p-value based methods, Tippett's test and Fisher's omnibus test. We also analyzed our microarray data to find genes differentially expressed by non-genotoxic and genotoxic carcinogen compounds. Both the numerical study and the real application showed that Stouffer's test performed better than Tippett's method and Fisher's omnibus method with additional permutation steps.
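Stouffer's combined statistic is simple to implement; the key point of the paper is that under dependence one plugs in an estimated variance of the summed z-scores instead of assuming it equals the number of tests. A minimal sketch:

```python
import math

def stouffer(z_scores, var=None):
    """Stouffer combined test. For independent tests var = len(z_scores);
    under dependence, pass an estimate of Var(sum of z) instead."""
    s = sum(z_scores)
    if var is None:
        var = float(len(z_scores))        # independence default
    z = s / math.sqrt(var)
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

z, p = stouffer([1.2, 0.8, 1.5, 2.0])
print(round(z, 3), round(p, 5))                       # 2.75, ~0.00596
z_dep, _ = stouffer([1.2, 0.8, 1.5, 2.0], var=6.0)    # positive correlation
print(round(z_dep, 3))                                # attenuated statistic
```

Positively correlated partial tests inflate Var(sum of z) above the test count, so using the larger estimated variance (here the hypothetical value 6 for 4 tests) correctly shrinks the combined statistic.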

  15. Large scale high strain-rate tests of concrete

    Directory of Open Access Journals (Sweden)

    Kiefer R.


    This work presents the stages of development of some innovative equipment, based on Hopkinson bar techniques, for performing large-scale dynamic tests of concrete specimens. The activity is centered at the recently upgraded HOPLAB facility, which is basically a split Hopkinson bar with a total length of approximately 200 m and with bar diameters of 72 mm. Through pre-tensioning and suddenly releasing a steel cable, force pulses of up to 2 MN, with 250 μs rise time and 40 ms duration, can be generated and applied to the specimen tested. The dynamic compression loading has first been treated and several modifications to the basic configuration have been introduced. Twin incident and transmitter bars have been installed with strong steel plates at their ends where large specimens can be accommodated. A series of calibration and qualification tests has been conducted, and the first real tests on concrete cylindrical specimens of 20 cm diameter and up to 40 cm length have commenced. Preliminary results from the analysis of the recorded signals indicate proper Hopkinson bar testing conditions and reliable functioning of the facility.
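Reducing the recorded bar signals to specimen stress and strain rate uses the classical split-Hopkinson relations. A sketch with illustrative values loosely matching the 72 mm bars and 20 cm specimens mentioned above (the strain amplitudes are hypothetical):

```python
import math

def hopkinson_state(E_bar, d_bar, d_spec, L_spec, c0, eps_r, eps_t):
    """Classical split-Hopkinson (1-wave) analysis:
    specimen stress  sigma = E_bar * (A_bar / A_spec) * eps_t
    strain rate      rate  = -2 * c0 * eps_r / L_spec
    """
    A_bar = math.pi * (d_bar / 2.0) ** 2
    A_spec = math.pi * (d_spec / 2.0) ** 2
    sigma = E_bar * (A_bar / A_spec) * eps_t
    rate = -2.0 * c0 * eps_r / L_spec
    return sigma, rate

# 72 mm steel bars, 20 cm x 40 cm concrete specimen, illustrative strains
sigma, rate = hopkinson_state(210e9, 0.072, 0.20, 0.40, 5000.0, -1.0e-4, 5.0e-4)
print(sigma / 1e6, "MPa,", rate, "1/s")
```

The area ratio A_bar/A_spec shows why oversized specimens demand large-diameter bars and multi-meganewton pulses: the bar must carry the full specimen force while remaining elastic.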

  16. Large-Scale Multiantenna Multisine Wireless Power Transfer (United States)

    Huang, Yang; Clerckx, Bruno


    Wireless Power Transfer (WPT) is expected to be a technology reshaping the landscape of low-power applications such as the Internet of Things, Radio Frequency identification (RFID) networks, etc. Although there has been some progress towards multi-antenna multi-sine WPT design, the large-scale design of WPT, reminiscent of massive MIMO in communications, remains an open challenge. In this paper, we derive efficient multiuser algorithms based on a generalizable optimization framework, in order to design transmit sinewaves that maximize the weighted-sum/minimum rectenna output DC voltage. The study highlights the significant effect of the nonlinearity introduced by the rectification process on the design of waveforms in multiuser systems. Interestingly, in the single-user case, the optimal spatial domain beamforming, obtained prior to the frequency domain power allocation optimization, turns out to be Maximum Ratio Transmission (MRT). In contrast, in the general weighted sum criterion maximization problem, the spatial domain beamforming optimization and the frequency domain power allocation optimization are coupled. Assuming channel hardening, low-complexity algorithms are proposed based on asymptotic analysis, to maximize the two criteria. The structure of the asymptotically optimal spatial domain precoder can be found prior to the optimization. The performance of the proposed algorithms is evaluated. Numerical results confirm the inefficiency of the linear model-based design for the single and multi-user scenarios. It is also shown that as nonlinear model-based designs, the proposed algorithms can benefit from an increasing number of sinewaves.
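The single-user MRT result is easy to verify numerically: conjugate beamforming attains the Cauchy-Schwarz bound ||h||^2 on the array gain, and any other unit-norm precoder does worse. A small sketch (synthetic Rayleigh channel, 8 antennas):

```python
import numpy as np

rng = np.random.default_rng(7)
h = (rng.normal(size=8) + 1j * rng.normal(size=8)) / np.sqrt(2)  # channel

w_mrt = h.conj() / np.linalg.norm(h)      # Maximum Ratio Transmission, ||w|| = 1
gain_mrt = abs(h @ w_mrt) ** 2            # equals ||h||^2, the maximum

w_rand = rng.normal(size=8) + 1j * rng.normal(size=8)
w_rand = w_rand / np.linalg.norm(w_rand)  # arbitrary unit-norm competitor
gain_rand = abs(h @ w_rand) ** 2
print(gain_mrt, gain_rand)
```

This is the spatial-domain step the paper shows can be fixed in advance for a single user; the coupling with frequency-domain power allocation only arises in the multiuser weighted-sum problem because of the rectifier nonlinearity.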

  17. How large-scale subsidence affects stratocumulus transitions

    Directory of Open Access Journals (Sweden)

    J. J. van der Dussen


    Some climate modeling results suggest that the Hadley circulation might weaken in a future climate, causing a subsequent reduction in the large-scale subsidence velocity in the subtropics. In this study we analyze the cloud liquid water path (LWP) budget from large-eddy simulation (LES) results of three idealized stratocumulus transition cases, each with a different subsidence rate. As shown in previous studies, a reduced subsidence is found to lead to a deeper stratocumulus-topped boundary layer, an enhanced cloud-top entrainment rate and a delay in the transition of stratocumulus clouds into shallow cumulus clouds during their equatorward advection by the prevailing trade winds. The effect of a reduction of the subsidence rate can be summarized as follows. The initial deepening of the stratocumulus layer is partly counteracted by an enhanced absorption of solar radiation. After some hours the deepening of the boundary layer is accelerated by an enhancement of the entrainment rate. Because this is accompanied by a change in the cloud-base turbulent fluxes of moisture and heat, the net change in the LWP due to changes in the turbulent flux profiles is negligibly small.
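The leading-order interplay between entrainment and subsidence can be caricatured with the mixed-layer equation dh/dt = w_e - D*h, whose equilibrium depth w_e/D deepens as the divergence D (and hence the subsidence) weakens. A sketch with illustrative numbers, far simpler than the LES budget analysis above:

```python
def boundary_layer_depth(h0, w_e, D, t_end, dt=60.0):
    """Forward-Euler integration of dh/dt = w_e - D*h, with entrainment rate
    w_e [m/s] and large-scale divergence D [1/s] (subsidence w_s = -D*h)."""
    h, t = h0, 0.0
    while t < t_end:
        h += dt * (w_e - D * h)
        t += dt
    return h

h_strong = boundary_layer_depth(800.0, 0.005, 6e-6, 3 * 86400)  # -> w_e/D ~ 833 m
h_weak = boundary_layer_depth(800.0, 0.005, 4e-6, 3 * 86400)    # deeper layer
print(round(h_strong, 1), round(h_weak, 1))
```

In the full LES picture w_e itself responds to the deepening layer, which is what produces the delayed stratocumulus-to-cumulus transition; this constant-w_e toy only captures the sign of the subsidence effect.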

  18. Research on large-scale wind farm modeling (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng


    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid affects the power system differently from traditional power plants. It is therefore necessary to establish an effective wind farm model to simulate and analyze both the influence wind farms have on the grid and the transient characteristics of the wind turbines when the grid is at fault. An effective wind turbine generator (WTG) model must be established first. As the doubly-fed VSCF wind turbine is currently the mainstream design, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed building process of the model. Common wind farm modeling methods are then surveyed and the problems encountered are pointed out. Since WAMS is widely used in the power system, online parameter identification of a wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  19. Planck intermediate results. XLII. Large-scale Galactic magnetic fields (United States)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Alves, M. I. R.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chiang, H. C.; Christensen, P. R.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dolag, K.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Ferrière, K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Ghosh, T.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hobson, M.; Hornstrup, A.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Oppermann, N.; Orlando, E.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Pasian, F.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. 
W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.


    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data but underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Though this specific analysis is dependent on the component separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.
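The rotation-measure observable used to constrain these field models is a simple line-of-sight integral, RM = 0.81 ∫ n_e B_∥ dl in the usual mixed units. A sketch for a uniform slab (illustrative mid-plane values, not Planck model parameters):

```python
import numpy as np

def rotation_measure(n_e, B_par, dl):
    """RM [rad/m^2] = 0.81 * sum(n_e [cm^-3] * B_parallel [uG] * dl [pc]),
    evaluated as a discrete sum over path segments."""
    return 0.81 * float(np.sum(n_e * B_par * dl))

# uniform slab: n_e = 0.03 cm^-3, B_par = 2 uG, 1 kpc path in 100 steps of 10 pc
n = 100
rm = rotation_measure(np.full(n, 0.03), np.full(n, 2.0), np.full(n, 10.0))
print(rm, "rad/m^2")
```

Because RM is signed in B_∥, a tangled field partially cancels along the path, which is why the inferred degree of field ordering is so sensitive to the systematics discussed above.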

  20. Challenges for large scale ab initio Quantum Monte Carlo (United States)

    Kent, Paul


    Ab initio Quantum Monte Carlo is an electronic structure method that is highly accurate, well suited to large scale computation, and potentially systematically improvable in accuracy. Due to increases in computer power, the method has been applied to systems where established electronic structure methods have difficulty reaching the accuracies desired to inform experiment without empiricism, a necessary step in the design of materials and a helpful step in the improvement of cheaper and less accurate methods. Recent applications include accurate phase diagrams of simple materials through to phenomena in transition metal oxides. Nevertheless there remain significant challenges to achieving a methodology that is robust and systematically improvable in practice, as well as capable of exploiting the latest generation of high-performance computers. In this talk I will describe the current state of the art, recent applications, and several significant challenges for continued improvement. Supported through the Predictive Theory and Modeling for Materials and Chemical Science program by the Office of Basic Energy Sciences (BES), Department of Energy (DOE).

  1. Comprehensive large-scale assessment of intrinsic protein disorder. (United States)

    Walsh, Ian; Giollo, Manuel; Di Domenico, Tomás; Ferrari, Carlo; Zimmermann, Olav; Tosatto, Silvio C E


    Intrinsically disordered regions are key for the function of numerous proteins. Due to the difficulties in experimental disorder characterization, many computational predictors have been developed, capturing various flavors of disorder. Their performance is generally measured on small sets drawn mainly from experimentally solved structures, e.g. Protein Data Bank (PDB) chains. MobiDB has only recently started to collect disorder annotations from multiple experimental structures. MobiDB annotates disorder for UniProt sequences, allowing us to conduct the first large-scale assessment of fast disorder predictors on 25,833 different sequences with X-ray crystallographic structures. In addition to a comprehensive ranking of predictors, this analysis produced the following interesting observations. (i) The predictors cluster according to their disorder definition, with a consensus giving more confidence. (ii) Previous assessments appear over-reliant on data annotated at the PDB chain level, and performance is lower on entire UniProt sequences. (iii) Long disordered regions are harder to predict. (iv) Depending on the structural and functional types of the proteins, differences in prediction performance of up to 10% are observed. The datasets are available from the authors' Web site. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved.

  2. Large Scale Software Building with CMake in ATLAS (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration


    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.

  3. Large-scale assembly bias of dark matter halos (United States)

    Lazeyras, Titouan; Musso, Marcello; Schmidt, Fabian


    We present precise measurements of the assembly bias of dark matter halos, i.e. the dependence of halo bias on properties other than mass, using curved "separate universe" N-body simulations which effectively incorporate an infinite-wavelength matter overdensity into the background density. This method measures the LIMD (local-in-matter-density) bias parameters b_n in the large-scale limit. We focus on the dependence of the first two Eulerian biases b_1^E and b_2^E on four halo properties: the concentration, spin, mass accretion rate, and ellipticity. We quantitatively compare our results with previous works in which assembly bias was measured on fairly small scales. Despite this difference, our findings are in good agreement with previous results. We also look at the joint dependence of bias on two halo properties in addition to the mass. Finally, using the excursion set peaks model, we attempt to shed new light on how assembly bias arises in this analytical model.

  4. A large-scale chromosome-specific SNP discovery guideline. (United States)

    Akpinar, Bala Ani; Lucas, Stuart; Budak, Hikmet


    Single-nucleotide polymorphisms (SNPs) are the most prevalent type of variation in genomes and are increasingly being used as molecular markers in diversity analyses, mapping and cloning of genes, and germplasm characterization. However, only a few studies have reported large-scale SNP discovery in Aegilops tauschii, restricting the potential use of SNPs as markers for the low-polymorphism D genome. Here, we report 68,592 SNPs found in the gene-related sequences of the 5D chromosome of Ae. tauschii genotype MvGB589, identified using genomic and transcriptomic sequences from seven Ae. tauschii accessions, including AL8/78, the only genotype for which a draft genome sequence is available at present. We also suggest a workflow to compare SNP positions in homologous regions on the 5D chromosome of Triticum aestivum, bread wheat, to mark single-nucleotide variations between these closely related species. Overall, the identified SNPs define a density of 4.49 SNPs per kilobase, among the highest reported for the genic regions of Ae. tauschii so far. To our knowledge, this study also presents the first chromosome-specific SNP catalog in Ae. tauschii, which should facilitate the association of these SNPs with morphological traits on chromosome 5D to be ultimately targeted for wheat improvement.
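
    The reported figures can be cross-checked with a quick calculation; a minimal sketch using the abstract's numbers (68,592 SNPs at a density of 4.49 SNPs/kb) to recover the implied length of the surveyed sequence:

```python
# Figures from the abstract: SNP count and density on chromosome 5D.
snp_count = 68_592
density_per_kb = 4.49      # SNPs per kilobase

# Implied total length of the gene-related sequence surveyed.
implied_kb = snp_count / density_per_kb
print(f"implied sequence length: {implied_kb:,.0f} kb "
      f"(about {implied_kb / 1000:.1f} Mb)")
```

    At the stated density, the surveyed gene-related sequence works out to roughly 15.3 Mb.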

  5. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala


    Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
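
    The framework targets scales far beyond a single machine, but the core of any spectral reduction is a small linear-algebra kernel; a minimal single-node sketch using PCA (the simplest spectral technique, shown for illustration, not the authors' parallel framework):

```python
import numpy as np

def pca(X, k):
    """Reduce X (n points x d dims) to k dims via the spectral
    decomposition of the covariance matrix."""
    Xc = X - X.mean(axis=0)                      # center the data
    cov = Xc.T @ Xc / (len(Xc) - 1)              # d x d covariance
    evals, evecs = np.linalg.eigh(cov)           # eigenvalues, ascending
    top = evecs[:, np.argsort(evals)[::-1][:k]]  # k leading eigenvectors
    return Xc @ top                              # project onto them

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # 200 points in 50 dimensions
Y = pca(X, k=3)
print(Y.shape)  # (200, 3)
```

    A parallel implementation distributes exactly these steps, centering, covariance accumulation, and the eigensolve, across nodes.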

  6. Heterogeneous Graph Propagation for Large-Scale Web Image Search. (United States)

    Xie, Lingxi; Tian, Qi; Zhou, Wengang; Zhang, Bo


    State-of-the-art web image search frameworks are often based on the bag-of-visual-words (BoVW) model and the inverted index structure. Despite their simplicity, efficiency, and scalability, they often suffer from low precision and/or recall, due to the limited stability of local features and the considerable information loss in the quantization stage. To refine the quality of retrieved images, various postprocessing methods have been adopted after the initial search process. In this paper, we investigate the online querying process from a graph-based perspective. We introduce a heterogeneous graph model containing both image and feature nodes explicitly, and propose an efficient reranking approach consisting of two successive modules, i.e., incremental query expansion and image-feature voting, to improve recall and precision, respectively. Compared with conventional reranking algorithms, our method does not require geometric information of visual words and therefore has low time and memory consumption. Moreover, our method is independent of the initial search process, and can be combined with many BoVW-based image search pipelines, or adopted after other postprocessing algorithms. We evaluate our approach on large-scale image search tasks and verify its competitive search performance.

  7. Fast large-scale object retrieval with binary quantization (United States)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi


    The objective of large-scale object retrieval systems is to search an image database for images that contain a target object. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for local search within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, making it compatible with the classic inverted-file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box containing the SIFT feature, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality, improving on state-of-the-art approaches for object retrieval.
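
    The scheme compares descriptors as bit-vectors under Hamming distance. The sketch below illustrates the general idea with a simple per-dimension median threshold; this quantizer is an assumption for illustration, the paper's actual binary quantization is more elaborate:

```python
import numpy as np

def binarize(desc, thresholds):
    """Quantize a real-valued descriptor into a bit-vector by
    thresholding each dimension (illustrative scheme only)."""
    return (desc > thresholds).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
db = rng.random((1000, 128))               # 1000 SIFT-like descriptors
thresholds = np.median(db, axis=0)         # per-dimension split points
codes = np.array([binarize(d, thresholds) for d in db])

# A near-duplicate of descriptor 42 should map to a nearby bit-vector.
query = db[42] + rng.normal(scale=0.01, size=128)
qcode = binarize(query, thresholds)
best = min(range(len(codes)), key=lambda i: hamming(qcode, codes[i]))
print(best)  # should recover index 42
```

    Because the codes are short bit-vectors, the whole index fits in main memory and candidate boxes can be scored with cheap bitwise operations.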

  8. Large-scale model of mammalian thalamocortical systems. (United States)

    Izhikevich, Eugene M; Edelman, Gerald M


    The understanding of the structural and dynamic complexity of mammalian brains is greatly facilitated by computer simulations. We present here a detailed large-scale thalamocortical model based on experimental measures in several mammalian species. The model spans three anatomical scales. (i) It is based on global (white-matter) thalamocortical anatomy obtained by means of diffusion tensor imaging (DTI) of a human brain. (ii) It includes multiple thalamic nuclei and six-layered cortical microcircuitry based on in vitro labeling and three-dimensional reconstruction of single neurons of cat visual cortex. (iii) It has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees. The model simulates one million multicompartmental spiking neurons calibrated to reproduce known types of responses recorded in vitro in rats. It has almost half a billion synapses with appropriate receptor kinetics, short-term plasticity, and long-term dendritic spike-timing-dependent synaptic plasticity (dendritic STDP). The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. We describe spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales.
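
    The model's million neurons are multicompartmental, but the spiking dynamics trace back to the first author's well-known two-variable reduction (Izhikevich 2003); a minimal sketch of a single regular-spiking cell under constant input, with illustrative step size and current:

```python
# Izhikevich's reduced neuron model:
#   v' = 0.04 v^2 + 5 v + 140 - u + I
#   u' = a (b v - u),  with reset v -> c, u -> u + d when v >= 30 mV.
# Parameters below give a regular-spiking cortical cell.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0
dt, I = 0.5, 10.0          # time step in ms, constant input current

spikes = []
for step in range(2000):   # simulate 1 s
    if v >= 30.0:          # spike: record the time and reset
        spikes.append(step * dt)
        v, u = c, u + d
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)

print(f"{len(spikes)} spikes in 1 s")
```

    The full model couples ~10^6 such units through ~half a billion plastic synapses; the emergent rhythms the abstract describes arise from that coupling, not from any single cell.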

  9. Large-scale comparative visualisation of sets of multidimensional data

    Directory of Open Access Journals (Sweden)

    Dany Vohl


    We present encube, a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: (1) the integration of comparative visualisation and analysis into a unified system; (2) the documentation of the discovery process; and (3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local desktop, making it a versatile solution. We discuss how our approach can help accelerate the discovery rate in a variety of research scenarios.

  10. Thermal activation of dislocations in large scale obstacle bypass (United States)

    Sobie, Cameron; Capolungo, Laurent; McDowell, David L.; Martinez, Enrique


    Dislocation dynamics simulations have been used extensively to predict hardening caused by dislocation-obstacle interactions, including irradiation defect hardening in the athermal case. Incorporating the role of thermal energy in these interactions is possible with a framework provided by harmonic transition state theory (HTST), which gives direct access to thermally activated reaction rates, including rates of dislocation-obstacle bypass processes, through the Arrhenius equation. Moving beyond unit dislocation-defect reactions to a representative environment containing a large number of defects requires coarse-graining the activation energy barriers of a population of obstacles into an effective energy barrier that accurately represents the large-scale collective process. The work presented here investigates the relationship between unit dislocation-defect bypass processes and the distribution of activation energy barriers calculated for ensemble bypass processes. A significant difference between these cases is observed, which is attributed to the inherent cooperative nature of dislocation bypass processes. In addition to the dislocation-defect interaction, the morphology of the dislocation segments pinned to the defects plays an important role in the activation energies for bypass. A phenomenological model for the stress dependence of the activation energy is shown to describe well the effect of a distribution of activation energies, and a probabilistic activation energy model incorporating the stress distribution in a material is presented.
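
    HTST reduces each unit bypass process to an Arrhenius rate, rate = nu0 * exp(-dE / (kB * T)); a sketch with illustrative numbers (the attempt frequency and the 0.8 eV barrier are assumptions for demonstration, not values from the study):

```python
import math

# Harmonic transition state theory: rate = nu0 * exp(-dE / (kB * T)).
kB = 8.617e-5          # Boltzmann constant, eV/K
nu0 = 1e13             # attempt frequency, 1/s (typical phonon scale)

def htst_rate(dE_eV, T):
    """Thermally activated rate for a barrier dE_eV at temperature T."""
    return nu0 * math.exp(-dE_eV / (kB * T))

for T in (300, 600, 900):
    print(f"T = {T} K: rate = {htst_rate(0.8, T):.3e} 1/s")
```

    The exponential sensitivity to dE is why coarse-graining a distribution of barriers into one effective barrier, the central problem in the abstract, is delicate: the fastest processes dominate the ensemble rate.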

  11. Weighted social networks for a large scale artificial society (United States)

    Fan, Zong Chen; Duan, Wei; Zhang, Peng; Qiu, Xiao Gang


    The method of artificial society has provided a powerful way to study and explain how individual behaviors at the micro level give rise to the emergence of global social phenomena. It also creates the need for an appropriate representation of social structure, which usually has a significant influence on human behaviors. It is widely acknowledged that social networks are the main paradigm for describing social structure and reflecting social relationships within a population. To generate social networks for a population of interest, considering both physical distance and social distance among people, we propose a generation model of social networks for a large-scale artificial society based on human choice behavior theory under the principle of random utility maximization. As a premise, we first build an artificial society by constructing a synthetic population with a series of attributes in line with the statistical (census) data for Beijing. Then the generation model is applied to assign social relationships to each individual in the synthetic population. Compared with previous empirical findings, the results show that our model can reproduce the general characteristics of social networks, such as a high clustering coefficient, significant community structure and the small-world property. Our model can also be extended to serve as the initial input of a larger social micro-simulation. It will facilitate research into and prediction of social phenomena and issues such as epidemic transmission and rumor spreading.
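
    Random utility maximization with i.i.d. Gumbel noise is the standard logit choice model; a toy sketch of distance-based tie formation in that spirit (the positions, decay parameter beta, and degree k are invented for illustration, not the paper's calibrated model):

```python
import numpy as np

# Random-utility tie formation: each person picks friends by maximizing
# utility = -beta * distance + Gumbel noise, equivalent to logit choice
# with probabilities proportional to exp(-beta * distance).
rng = np.random.default_rng(0)
n, beta, k = 100, 2.0, 5               # people, distance decay, ties each
pos = rng.random((n, 2))               # positions in a unit square
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)

ties = set()
for i in range(n):
    utility = -beta * dist[i] + rng.gumbel(size=n)
    utility[i] = -np.inf                          # no self-ties
    for j in np.argsort(utility)[-k:]:            # k best alternatives
        ties.add((min(i, j), max(i, j)))          # store undirected edge

print(len(ties), "undirected ties among", n, "people")
```

    In the paper's full model, "distance" combines physical and social distance and the population carries census-calibrated attributes; the choice mechanism, however, is of this logit form.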

  12. Large-scale direct shear testing of municipal solid waste. (United States)

    Zekkos, Dimitrios; Athanasopoulos, George A; Bray, Jonathan D; Grizi, Athena; Theodoratos, Andreas


    Large direct shear testing (300 mm x 300 mm box) of municipal solid waste (MSW) collected from a landfill located in the San Francisco Bay area was performed to gain insight on the shear response of MSW. The study investigated the effects of waste composition, confining stress, unit weight, and loading rate on the stress-displacement response and shear strength of MSW. The amount and orientation of the fibrous waste materials in the MSW were found to play a critical role. The fibrous material had little effect on the MSW's strength when it was oriented parallel to the shear surface, as is typically the case when waste material is compressed vertically and then tested in a direct shear apparatus. Tests in which the fibrous material was oriented perpendicular to the horizontal shear surface produced significantly stronger MSW specimens. The test results indicate that confining stress and loading rate are also important factors. Based on 109 large-scale direct shear tests, the shear strength of MSW at low moisture contents is best characterized by cohesion=15 kPa, friction angle=36 degrees at a normal stress of 1 atmosphere, and a decrease in the friction angle of 5 degrees for every log-cycle increase in normal stress. 2010 Elsevier Ltd. All rights reserved.
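
    The reported strength parameters define a stress-dependent Mohr-Coulomb envelope, tau = c + sigma_n * tan(phi) with phi decreasing by 5 degrees per log-cycle of normal stress; a small sketch evaluating it (the normal stresses fed in are illustrative inputs, not test data):

```python
import math

# Shear strength envelope reported in the abstract:
#   tau = c + sigma_n * tan(phi),  c = 15 kPa,
#   phi = 36 deg at sigma_n = 1 atm, minus 5 deg per log10-cycle.
P_ATM = 101.325  # kPa

def msw_shear_strength(sigma_n_kpa):
    phi_deg = 36.0 - 5.0 * math.log10(sigma_n_kpa / P_ATM)
    return 15.0 + sigma_n_kpa * math.tan(math.radians(phi_deg))

for s in (50.0, 101.325, 500.0):
    print(f"sigma_n = {s:7.1f} kPa -> tau = {msw_shear_strength(s):6.1f} kPa")
```

    The log-cycle reduction in friction angle captures the curvature of the failure envelope that the 109 tests revealed at higher confining stresses.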

  13. Large Scale Risks from Agricultural Pesticides in Small Streams. (United States)

    Szöcs, Eduard; Brinke, Marvin; Karaoglan, Bilgin; Schäfer, Ralf B


    Small streams are important refuges for biodiversity. In agricultural areas, they may be at risk from pesticide pollution. However, most related studies have been limited to a few streams at the regional level, hampering extrapolation to larger scales. We quantified risks as exceedances of regulatory acceptable concentrations (RACs) and used German monitoring data to quantify the drivers thereof and to assess current risks in small streams on a large scale. The data set comprised 1,766,104 measurements of 478 pesticides (including metabolites) related to 24,743 samples from 2,301 sampling sites. We investigated the influence of agricultural land use, catchment size, and precipitation and seasonal dynamics on pesticide risk, also taking into account concentrations below the limit of quantification. Risk-threshold exceedances were 3.7-fold lower at sites with no agriculture. Precipitation increased detection probability by 43%, and concentrations were highest from April to June. Overall, this indicates that agricultural land use is a major contributor of pesticides in streams. RACs were exceeded in 26% of streams, with the highest exceedances found for neonicotinoid insecticides. We conclude that pesticides from agricultural land use are a major threat to small streams and their biodiversity. To reflect peak concentrations, current pesticide monitoring needs refinement.

  14. Boundary element method solution for large scale cathodic protection problems (United States)

    Rodopoulos, D. C.; Gortsas, T. V.; Tsinopoulos, S. V.; Polyzos, D.


    Cathodic protection techniques are widely used to prevent corrosion in offshore structures. The Boundary Element Method (BEM) is an ideal method for solving such problems because it requires meshing only the boundary, and not the whole domain of the electrolyte as the Finite Element Method does. This advantage becomes more pronounced in cathodic protection systems, since electrochemical reactions occur mainly on the surface of the metallic structure. The present work aims to solve numerically a sacrificial cathodic protection problem for a large offshore platform. The solution of this large-scale problem is accomplished by means of “PITHIA Software”, a BEM package enhanced by Hierarchical Matrices (HM) and Adaptive Cross Approximation (ACA) techniques that drastically accelerate the computations and reduce memory requirements. The nonlinear polarization curves for steel and aluminium in seawater are employed as boundary conditions for the metallic surfaces under protection and the aluminium anodes, respectively. The potential as well as the current density over the entire surface of the platform are effectively evaluated and presented.

  15. Flat-Land Large-Scale Electricity Storage (FLES)

    Directory of Open Access Journals (Sweden)

    Schalij R.


    Growth of renewable sources requires a smarter electricity grid, integrating multiple solutions for large-scale storage. Pumped storage is still the most viable option. The capacity of existing facilities is not sufficient to accommodate future renewable resources. New locations for additional pumped storage capacity are scarce. Mountainous areas are mostly remote and do not allow construction of large facilities for ecological reasons. In the Netherlands, underground solutions were studied for many years. The use of (former) coal mines was rejected after scientific research. Further research showed that solid rock formations below the (unstable) coal layers can be harnessed to excavate the lower water reservoir for pumped storage, making an innovative underground solution possible. A complete plan was developed, with a capacity of 1400 MW (8 GWh daily output) and a head of 1400 m. It is technically and economically feasible. Compared to conventional pumped storage it has significantly less impact on the environment. Less vulnerable locations are eligible. The reservoir on the surface (only one instead of two) is relatively small. It also offers a solution for other European countries. The Dutch studies provide a valuable basis for new locations.
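
    The quoted 8 GWh at 1400 m head can be sanity-checked against the hydrostatic energy relation E = rho * g * h * V * eta; the efficiency here is an assumed round number, not a figure from the article:

```python
# Rough check of the quoted figures for the underground pumped-storage plan.
rho, g = 1000.0, 9.81        # water density kg/m^3, gravity m/s^2
head = 1400.0                # m
energy_gwh = 8.0             # daily output quoted in the abstract
eta = 0.9                    # assumed turbine/generator efficiency

energy_j = energy_gwh * 3.6e12                  # 1 GWh = 3.6e12 J
volume = energy_j / (rho * g * head * eta)      # m^3 of water cycled daily
print(f"required reservoir volume: {volume / 1e6:.2f} million m^3")
```

    A volume on the order of 2 million cubic metres is consistent with the abstract's point that the single surface reservoir can stay relatively small, precisely because the head is so large.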

  16. Large scale electromechanical transistor with application in mass sensing

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Leisheng; Li, Lijie, E-mail: [Multidisciplinary Nanotechnology Centre, College of Engineering, Swansea University, Swansea SA2 8PP (United Kingdom)


    Nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration—an external force has to be used. In this paper, a LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them being embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate shifts method by reading the change of electrical current from the transistor, which has much higher sensitivity than conventional eigenfrequency shift approach used in classical cantilever based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.

  17. HTS cables open the window for large-scale renewables (United States)

    Geschiere, A.; Willén, D.; Piga, E.; Barendregt, P.


    In a realistic approach to future energy consumption, the effects of sustainable power sources and of growing welfare with increased use of electricity need to be considered. These factors lead to an increased transfer of electric energy over the networks. A dominant part of the energy need will come from expanded large-scale renewable sources. To use them efficiently across Europe, large energy transits between different countries are required. Bottlenecks in the existing infrastructure will be avoided by strengthening the network. For environmental reasons, more infrastructure will be built underground. Nuon is studying HTS technology as a component to solve these challenges. This technology offers a tremendously large power transport capacity as well as the possibility to reduce short-circuit currents, making integration of renewables easier. Furthermore, power transport will be possible at lower voltage levels, giving the opportunity to upgrade the existing network while re-using it. This will result in large cost savings while meeting future energy challenges. In a 6 km backbone structure in Amsterdam, Nuon wants to install a 50 kV HTS Triax cable for a significant increase of the transport capacity, while developing its capabilities. Nevertheless, several barriers have to be overcome.

  18. FRVT 2006 and ICE 2006 large-scale experimental results. (United States)

    Phillips, P Jonathon; Scruggs, W Todd; O'Toole, Alice J; Flynn, Patrick J; Bowyer, Kevin W; Schott, Cathy L; Sharpe, Matthew


    This paper describes the large-scale experimental results from the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006. The FRVT 2006 looked at recognition from high-resolution still frontal face images and 3D face images, and measured performance for still frontal face images taken under controlled and uncontrolled illumination. The ICE 2006 evaluation reported verification performance for both left and right irises. The images in the ICE 2006 data set intentionally represent a broader range of quality than the sensor would normally acquire, including images that did not pass the quality-control software embedded in the sensor. The FRVT 2006 results from controlled still and 3D images document at least an order-of-magnitude improvement in recognition performance over the FRVT 2002. The FRVT 2006 and the ICE 2006 compared recognition performance from high-resolution still frontal face images, 3D face images, and single-iris images. On the FRVT 2006 and ICE 2006 data sets, recognition performance was comparable for high-resolution frontal face, 3D face, and iris images. In an experiment comparing humans and algorithms on matching face identity across changes in illumination in frontal face images, the best-performing algorithms were more accurate than humans on unfamiliar faces.

  19. Comparison of sample preparation techniques for large-scale proteomics. (United States)

    Kuljanin, Miljan; Dieters-Castator, Dylan Z; Hess, David A; Postovit, Lynne-Marie; Lajoie, Gilles A


    Numerous workflows exist for large-scale bottom-up proteomics, many of which achieve exceptional proteome depth. Herein, we evaluated the performance of several commonly used sample preparation techniques for proteomic characterization of HeLa lysates [unfractionated in-solution digests, SDS-PAGE coupled with in-gel digestion, gel-eluted liquid fraction entrapment electrophoresis (GELFrEE) technology, SCX StageTips, and high-/low-pH reversed-phase fractionation (HpH)]. HpH fractionation was found to be superior in terms of proteome depth (>8400 proteins detected) and fractionation efficiency compared to the other techniques. SCX StageTip fractionation required minimal sample handling and was also a substantial improvement over SDS-PAGE separation and GELFrEE technology. Sequence coverage of the HeLa proteome increased to 38% when all workflows were combined; however, the total number of proteins detected improved only slightly, to 8710. In summary, HpH fractionation and SCX StageTips are robust techniques highly suited for complex proteome analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad


    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel, and the symmetric/hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and pushing memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA, and should appear in CUBLAS-6.0. KBLAS has been used in large-scale simulations of multi-object adaptive optics.
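
    For reference, the level-2 GEMV operation that KBLAS optimizes computes y <- alpha * A * x + beta * y; a NumPy sketch of the semantics only, since KBLAS's contribution is a fast CUDA implementation of this same kernel:

```python
import numpy as np

# Level-2 BLAS GEMV semantics: y <- alpha * A @ x + beta * y.
def gemv(alpha, A, x, beta, y):
    return alpha * (A @ x) + beta * y

rng = np.random.default_rng(0)
A = rng.random((4, 3))   # m x n matrix
x = rng.random(3)        # length-n vector
y = rng.random(4)        # length-m vector, scaled and accumulated into
out = gemv(2.0, A, x, 0.5, y)
assert np.allclose(out, 2.0 * A.dot(x) + 0.5 * y)
print(out.shape)  # (4,)
```

    Because GEMV does O(mn) work on O(mn) data, it is memory-bandwidth bound, which is why the abstract emphasizes latency hiding and bandwidth saturation rather than arithmetic throughput.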

  1. Large-scale sequence analyses of Atlantic cod. (United States)

    Johansen, Steinar D; Coucheron, Dag H; Andreassen, Morten; Karlsen, Bård Ove; Furmanek, Tomasz; Jørgensen, Tor Erik; Emblem, Ase; Breines, Ragna; Nordeide, Jarle T; Moum, Truls; Nederbragt, Alexander J; Stenseth, Nils C; Jakobsen, Kjetill S


    The Atlantic cod (Gadus morhua) is a key species in the North Atlantic ecosystem and commercial fisheries, with increasing aquacultural production in several countries. A Norwegian effort to sequence the complete 0.9 Gbp genome by the 454 pyrosequencing technology has been initiated and is in progress. Here we review recent progress in large-scale sequence analyses of the nuclear genome, the mitochondrial genome and genome-wide microRNA identification in the Atlantic cod. The nuclear genome will be de novo sequenced with 25 times oversampling. A total of 120 mitochondrial genomes, sampled from several locations in the North Atlantic, are being completely sequenced by Sanger technology in a high-throughput pipeline. These sequences will be included in a new database for maternal marker reference of Atlantic cod diversity. High-throughput 454 sequencing, as well as Evolutionary Image Array (EvoArray) informatics, is used to investigate the complete set of expressed microRNAs and corresponding mRNA targets in various developmental stages and tissues. Information about microRNA profiles will be essential in the understanding of transcriptome complexity and regulation. Finally, developments and perspectives of Atlantic cod aquaculture are discussed in the light of next-generation high-throughput sequence technologies.

  2. Decentralization, stabilization, and estimation of large-scale linear systems (United States)

    Siljak, D. D.; Vukcevic, M. B.


    In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method tolerate a wide class of nonlinearities in subsystem coupling and exhibit high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding the design of a single estimator for the overall system.
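    The local-plus-coupling structure can be illustrated with a small numerical sketch (hypothetical matrices, not taken from the paper): each subsystem is stabilized in isolation by local state feedback, and the interconnected closed loop is then checked for stability.

```python
import numpy as np

# Hypothetical illustration: two unstable second-order subsystems with
# scalar (decentralized) inputs, stabilized by *local* feedback only,
# then verified as an interconnected system.
A1 = np.array([[0.0, 1.0], [2.0, 0.0]])   # subsystem 1, eigenvalues +/- sqrt(2)
A2 = np.array([[0.0, 1.0], [3.0, 0.0]])   # subsystem 2, eigenvalues +/- sqrt(3)
B = np.array([[0.0], [1.0]])              # scalar input per subsystem

# Local gains placing the decoupled poles at {-1, -2} and {-1.5, -2.5}
# (read off the companion-form characteristic polynomials).
K1 = np.array([[4.0, 3.0]])
K2 = np.array([[6.75, 4.0]])

eps = 0.1                                  # weak subsystem interconnection
A_closed = np.block([[A1 - B @ K1, eps * np.eye(2)],
                     [eps * np.eye(2), A2 - B @ K2]])

# Local stabilization tolerates the coupling: all eigenvalues of the
# interconnected closed loop remain in the left half-plane.
assert np.all(np.linalg.eigvals(A_closed).real < 0)
```

    The paper's point is exactly this tolerance: gains designed on the subsystem level keep the overall system stable for a wide class of (here, linear and weak) interconnections.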

  3. Large-scale variations of M/L

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Y.; Shaham, J.; Shaviv, G.


    The large-scale behavior of M/L is suggested to be related to segregation between dark and luminous matter (galaxies). It is suggested that during the collapse and virialization of groups and clusters the galactic halos are tidally disrupted and a smooth background of dark matter is formed; galaxies moving through this background lose energy due to dynamical friction, which is the main process considered here. We assume that at the epoch of galaxy formation M/L had some universal, scale-independent value, (M/L)∞. As systems of galaxies evolved, the energy in their luminous component was dissipated. That component became denser and consequently the observed M/L decreased. The ratio (M/L)/(M/L)∞ depends on the ratio of the age of the universe to the dissipation time scale, which is found to be smaller than 1 for all relevant parameters. The main predictions of the present model are: (i) M/L is a nondecreasing function of R, and it reaches its limiting value, (M/L)∞, on the scale of superclusters; (ii) 0.4 ≤ Ω₀ ≤ 0.5 and −1.5 ≤ n ≤ −1.0, where n is the index of the power spectrum of the primordial isothermal density perturbations. Because of ambiguities in the data and simplifications in the model, prediction (ii) may be less accurate than prediction (i).

  4. Multiple Spectral Components in Large-Scale Jets (United States)

    Meyer, Eileen; Georganopoulos, Markos; Petropoulou, Maria; Breiding, Peter


    One of the most striking discoveries of the Chandra X-ray Observatory is the population of bright X-ray emitting jets hosted by powerful quasars. Most of these jets show hard X-ray spectra which require a spectral component separate from the radio-optical synchrotron emission, which usually peaks at or before the infrared. Though the origin of this high-energy spectral component has been debated for nearly two decades, it is still not understood, with major implications for our understanding of particle acceleration in jets, as well as the total energy carried by them. Until recently the prevailing interpretation of the second component has been inverse-Compton upscattering of the CMB by a still highly relativistic jet at kpc scales. I will briefly describe the recent work calling the IC/CMB model into serious question (including X-ray variability, UV polarization, gamma-ray upper limits, and proper motions), and present new results, based on new ALMA, HST, and Chandra observations, which suggest that more than two distinct spectral components may be present in some large-scale jets, and that these multiple components appear to arise in jets across the full range of jet power, not just in the most powerful sources. These results are very difficult to reconcile with simple models of jet emission, and I will discuss these failures and some possible directions for the future, including hadronic models.

  5. Storm induced large scale TIDs observed in GPS derived TEC

    Directory of Open Access Journals (Sweden)

    C. Borries


    This work is a first statistical analysis of large-scale traveling ionospheric disturbances (LSTIDs) in Europe using total electron content (TEC) data derived from GNSS measurements. The GNSS receiver network in Europe is dense enough to map the ionospheric perturbation TEC with high horizontal resolution. The derived perturbation TEC maps are analysed to study the effect of space weather events on the ionosphere over Europe. Equatorward-propagating storm-induced wave packets have been identified during several geomagnetic storms. Characteristic parameters such as velocity, wavelength and direction were estimated from the perturbation TEC maps. With a mean wavelength of 2000 km, a mean period of 59 min and an average phase speed of 684 m s−1, the perturbations are classified as LSTIDs. A comparison with LSTIDs observed over Japan shows a comparable wavelength but a considerably faster phase speed. This might be attributed to differences in the distance to the auroral region or in the inclination/declination of the geomagnetic field lines. The observed correlation between the LSTID amplitudes and the Auroral Electrojet (AE) index indicates that most of the wave-like perturbations are excited by Joule heating. Particle precipitation effects could not be separated.

  7. Large scale structures in liquid crystal/clay colloids (United States)

    van Duijneveldt, Jeroen S.; Klein, Susanne; Leach, Edward; Pizzey, Claire; Richardson, Robert M.


    Suspensions of three different clays in K15, a thermotropic liquid crystal, have been studied by optical microscopy and small-angle X-ray scattering. The three clays were claytone AF, a surface-treated natural montmorillonite; laponite RD, a synthetic hectorite; and mined sepiolite. The claytone and laponite were sterically stabilized, whereas sepiolite formed a relatively stable suspension in K15 without any surface treatment. Micrographs of the different suspensions revealed that all three contained large-scale structures. The nature of these aggregates was investigated using small-angle X-ray scattering. For the clays with sheet-like particles, claytone and laponite, the flocs contain a mixture of stacked and single platelets. The basal spacing in the stacks was independent of particle concentration in the suspension and the phase of the solvent. The number of platelets in the stack and their percentage in the suspension varied with concentration and the aspect ratio of the platelets. The lath-shaped sepiolite did not show any tendency to organize into ordered structures. Here the aggregates are networks of randomly oriented single rods.

  8. Impact of large scale flows on turbulent transport

    Energy Technology Data Exchange (ETDEWEB)

    Sarazin, Y [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Grandgirard, V [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Dif-Pradalier, G [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Fleurence, E [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Garbet, X [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Ghendrih, Ph [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Bertrand, P [LPMIA-Universite Henri Poincare Nancy I, Boulevard des Aiguillettes BP239, 54506 Vandoeuvre-les-Nancy (France); Besse, N [LPMIA-Universite Henri Poincare Nancy I, Boulevard des Aiguillettes BP239, 54506 Vandoeuvre-les-Nancy (France); Crouseilles, N [IRMA, UMR 7501 CNRS/Universite Louis Pasteur, 7 rue Rene Descartes, 67084 Strasbourg (France); Sonnendruecker, E [IRMA, UMR 7501 CNRS/Universite Louis Pasteur, 7 rue Rene Descartes, 67084 Strasbourg (France); Latu, G [LSIIT, UMR 7005 CNRS/Universite Louis Pasteur, Bd Sebastien Brant BP10413, 67412 Illkirch (France); Violard, E [LSIIT, UMR 7005 CNRS/Universite Louis Pasteur, Bd Sebastien Brant BP10413, 67412 Illkirch (France)


    The impact of large-scale flows on turbulent transport in magnetized plasmas is explored by means of various kinetic models. Zonal flows are found to lead to a non-linear upshift of turbulent transport in a 3D kinetic model for interchange turbulence. Such a transition is absent from fluid simulations, performed with the same numerical tool, which also predict a much larger transport. The discrepancy cannot be explained by zonal flows alone, even though they are overdamped in fluids. Indeed, some difference remains, although reduced, when they are artificially suppressed. Zonal flows are also reported to trigger transport barriers in a 4D drift-kinetic model for slab ion temperature gradient (ITG) turbulence. The density gradient acts as a source drive for zonal flows, while their curvature in turn stabilizes the turbulence. Finally, 5D simulations of toroidal ITG modes with the global and full-f GYSELA code require the equilibrium distribution function to depend on the motion invariants only. If not, the generated strong mean flows can completely quench turbulent transport.

  9. Performance Assessment of a Large Scale Pulsejet- Driven Ejector System (United States)

    Paxson, Daniel E.; Litke, Paul J.; Schauer, Frederick R.; Bradley, Royce P.; Hoke, John L.


    Unsteady thrust augmentation was measured on a large-scale driver/ejector system. A 72 in. long, 6.5 in. diameter, 100 lbf pulsejet was tested with a series of straight, cylindrical ejectors of varying length and diameter. A tapered ejector configuration of varying length was also tested. The objectives of the testing were to determine the ejector dimensions that maximize thrust augmentation, and to compare the dimensions and augmentation levels so obtained with those of other, similarly maximized, but smaller-scale systems on which much of the recent unsteady ejector thrust augmentation work has been performed. An augmentation level of 1.71 was achieved with the cylindrical ejector configuration and 1.81 with the tapered ejector configuration. These levels are consistent with, but slightly lower than, the highest levels achieved with the smaller systems. The ejector diameter yielding maximum augmentation was 2.46 times the diameter of the pulsejet, a ratio that closely matches those of the small-scale experiments. For the straight ejector, the length yielding maximum augmentation was 10 times the diameter of the pulsejet, also nearly the same as in the small-scale experiments. Testing procedures are described, as are the parametric variations in ejector geometry. Results are discussed in terms of their implications for general scaling of pulsed thrust ejector systems.

  10. Fast voltage stability assessment for large-scale power systems

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y. [Shandong Univ., Jinan (China). School of Electrical Engineering; Wang, L.; Yu, Z. [Shandong Electric Power Co., Jinan (China). Electric Power Control Center


    A new method of assessing online voltage stability in large-scale power systems was presented. A local voltage stability index was used to determine weak buses in the system. A case study of the Shandong power system in China was used to demonstrate the accuracy and speed of the method for online applications. The local method was based on the fact that the Thevenin equivalent impedance, as determined from the load bus, and the apparent load impedance are equal in magnitude at the point of voltage collapse. Participant buses and key power sources of both the reactive and active power transmission paths were determined using electrical distance measurements. The case study demonstrated that the reactive power reserve of key generators has a significant impact on voltage stability. The study also demonstrated that the voltage stability of the weakest power transmission path can decline or shift when some generators reach their limits. It was concluded that combining voltage stability indices and reactive power reserves increases the accuracy of voltage stability assessments. 11 refs., 3 tabs., 6 figs.
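    The Thevenin-matching condition behind such local indices can be sketched as follows (a generic illustration with hypothetical per-unit numbers, not the paper's algorithm): the index approaches 1 as the apparent load impedance falls toward the Thevenin impedance, i.e. toward the maximum power transfer point where voltage collapse occurs.

```python
def thevenin_index(z_thevenin: complex, v_load: complex, i_load: complex) -> float:
    """Generic local voltage-stability index: |Z_thev| / |Z_load|.
    Voltage collapse is approached as the ratio tends to 1, since at
    maximum power transfer |Z_load| = |Z_thev|."""
    z_load = v_load / i_load          # apparent load impedance from local V, I
    return abs(z_thevenin) / abs(z_load)

# Hypothetical per-unit values: a lightly loaded bus vs. a heavily loaded one.
z_th = 0.1 + 0.4j
light = thevenin_index(z_th, v_load=1.0 + 0j, i_load=0.2 + 0.05j)
heavy = thevenin_index(z_th, v_load=0.85 + 0j, i_load=1.8 + 0.6j)
assert light < heavy < 1.0   # the heavier load sits closer to the collapse point
```

    In practice the Thevenin equivalent itself must be estimated online from successive phasor measurements at the bus, which is where the method's speed matters.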


    Directory of Open Access Journals (Sweden)

    E. Panidi


    In our study we estimate relationships between quantitative parameters of relief, the soil runoff regime, and the spatial distribution of radioactive pollutants in the soil. The study is conducted on a test arable area located in the basin of the upper Oka River (Orel region, Russia). Previously we collected a large number of soil samples, which make it possible to investigate the redistribution of Chernobyl-origin cesium-137 in the soil material and, as a consequence, the soil runoff magnitude at the sampling points. Here we describe and discuss the technique applied to large-scale mapping of soil runoff. The technique is based upon cesium-137 radioactivity measurements in different relief structures. Key stages are: allocation of the soil sampling points (very high resolution space imagery is used as supporting data); soil sample collection and analysis; calibration of the mathematical model (using the estimated background value of cesium-137 radioactivity); and automated compilation of a predictive map of the studied territory (a digital elevation model is used for this purpose, and cesium-137 radioactivity can be predicted from quantitative parameters of the relief). The maps can be used as supporting data for precision agriculture and for recultivation or melioration purposes.

  12. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T


    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
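    The compress-then-query idea can be illustrated with a minimal Haar wavelet transform in NumPy (an illustrative sketch, not the AQSIM implementation): decompose the signal, keep only the largest coefficients, and answer approximate aggregate queries from the truncated reconstruction.

```python
import numpy as np

def haar_forward(x):
    """Full Haar decomposition of a length-2^k signal into detail levels + average."""
    x = x.astype(float).copy()
    coeffs = []
    while len(x) > 1:
        coeffs.append((x[0::2] - x[1::2]) / np.sqrt(2.0))   # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)              # running averages
    coeffs.append(x)                                        # overall scaled average
    return coeffs

def haar_inverse(coeffs):
    x = coeffs[-1]
    for diff in reversed(coeffs[:-1]):
        up = np.empty(2 * len(diff))
        up[0::2] = (x + diff) / np.sqrt(2.0)
        up[1::2] = (x - diff) / np.sqrt(2.0)
        x = up
    return x

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.01 * rng.standard_normal(256)
coeffs = haar_forward(signal)
flat = np.concatenate(coeffs)
threshold = np.sort(np.abs(flat))[-32]      # keep the ~32 largest coefficients
truncated = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
approx = haar_inverse(truncated)

# An aggregate query (here: the mean) answered from ~12.5% of the
# coefficients stays close to the exact answer.
assert abs(approx.mean() - signal.mean()) < 0.05
```

    Trend and surprise queries work the same way: they are answered on the coarse coefficient levels, which are tiny compared with the raw data.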

  13. Large Scale Synthesis of Carbon Nanofibres on Sodium Chloride Support

    Directory of Open Access Journals (Sweden)

    Ravindra Rajarao


    Large-scale synthesis of carbon nanofibres (CNFs) on a sodium chloride support has been achieved. CNFs were synthesized using metal oxalates (Ni, Co and Fe) as catalyst precursors at 680 °C by the chemical vapour deposition method. Upon pyrolysis, these catalyst precursors yield catalyst nanoparticles directly. Sodium chloride was used as the catalyst support; it was chosen because of its non-toxic and water-soluble nature. Problems such as the detrimental effects of CNF synthesis on the environment, and even cost, are avoided by using a water-soluble support. The structure of the products was characterized by scanning electron microscopy, transmission electron microscopy and Raman spectroscopy. The purity of the as-grown and purified products was determined by thermal analysis and the X-ray diffraction method. Here we report yields of 7600, 7000 and 6500 wt% for CNFs synthesized over nickel, cobalt and iron oxalate, respectively. Long, curved and worm-shaped CNFs were obtained on the Ni, Co and Fe catalysts, respectively. The lengthy process of calcination and reduction for the preparation of catalysts is avoided in this method. This synthesis route is simple and economical, and hence can be used for CNF synthesis in industry.

  14. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer


    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of different areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it has found its way into many important applications, including face recognition, tracking, super resolution, and image denoising, to name a few. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, practitioners tend to downsample and subsample the problem (e.g. via dimensionality reduction) to maintain a manageably sized LASSO, which usually comes at the cost of solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver or matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g. on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.
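    For contrast with such specialized solvers, a minimal LASSO baseline (plain ISTA with element-wise soft-thresholding, not the paper's circulant/FFT reformulation) looks like this; the soft-thresholding step is the same element-wise proximal operator that appears inside most LASSO algorithms.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]     # planted 3-sparse signal
b = A @ x_true
x_hat = ista_lasso(A, b, lam=0.1)
# The three largest recovered coefficients sit on the planted support.
assert set(np.argsort(np.abs(x_hat))[-3:]) == {5, 40, 77}
```

    The cost per ISTA iteration is dominated by the two matrix-vector products; FFTLasso's contribution is replacing that dense linear algebra with independent element-wise updates plus a 1D FFT.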

  15. Large-Scale NASA Science Applications on the Columbia Supercluster (United States)

    Brooks, Walter


    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  16. Large-scale functional purification of recombinant HIV-1 capsid.

    Directory of Open Access Journals (Sweden)

    Magdeleine Hung

    During human immunodeficiency virus type-1 (HIV-1) virion maturation, capsid proteins undergo a major rearrangement to form a conical core that protects the viral nucleoprotein complexes. Mutations in the capsid sequence that alter the stability of the capsid core are deleterious to viral infectivity and replication. Recently, capsid assembly has become an attractive target for the development of a new generation of anti-retroviral agents. Drug screening efforts and subsequent structural and mechanistic studies require gram quantities of active, homogeneous and pure protein. Conventional means of laboratory purification of Escherichia coli-expressed recombinant capsid protein rely on column chromatography steps that are not amenable to large-scale production. Here we present a function-based purification of wild-type and quadruple-mutant capsid proteins, which relies on the inherent propensity of capsid protein to polymerize and depolymerize. This method does not require the packing of sizable chromatography columns and can generate double-digit gram quantities of functionally and biochemically well-behaved proteins with greater than 98% purity. We have used the purified capsid protein to characterize two known assembly inhibitors in our in-house developed polymerization assay and to measure their binding affinities. Our capsid purification procedure provides a robust method for purifying large quantities of a key protein in the HIV-1 life cycle, facilitating identification of the next generation of anti-HIV agents.

  17. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gabert, Kasimir [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burns, Ian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kallaher, Jenna [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vail, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  18. LEMON - LHC Era Monitoring for Large-Scale Infrastructures (United States)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron


    At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. However, as a result, monitoring complexity is increasing. Computer centre management requires not only monitoring of servers, network equipment and associated software, but also collection of additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to maintain a good overview of infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used to notify the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.

  19. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie


    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
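    The bitmap-index idea underlying FastBit can be sketched in a few lines (an illustrative toy, not the FastQuery/FastBit implementation): precompute one bitmap per value bin, then answer range queries with bitwise operations instead of scanning the raw data.

```python
import numpy as np

def build_bitmap_index(values, bin_edges):
    """One boolean bitmap per bin; bitmaps[i, j] says record j falls in bin i.
    np.digitize assigns 1-based interior bins, so row 0 and the last row
    cover values below/above the edge range."""
    bins = np.digitize(values, bin_edges)
    return np.stack([bins == i for i in range(len(bin_edges) + 1)])

def range_query(bitmaps, lo_bin, hi_bin):
    """Indices of records whose bin lies in [lo_bin, hi_bin], via bitwise OR."""
    mask = np.zeros(bitmaps.shape[1], dtype=bool)
    for i in range(lo_bin, hi_bin + 1):
        mask |= bitmaps[i]
    return np.flatnonzero(mask)

rng = np.random.default_rng(0)
energy = rng.uniform(0.0, 10.0, size=1_000)     # stand-in for a particle attribute
edges = np.linspace(0.0, 10.0, 11)              # 10 bins of width 1
bitmaps = build_bitmap_index(energy, edges)

# "Interesting particles": energy in [8, 10) -> bins 9 and 10.
hits = range_query(bitmaps, 9, 10)
assert np.array_equal(hits, np.flatnonzero((energy >= 8.0) & (energy < 10.0)))
```

    FastBit adds compressed bitmap encodings and multi-level binning on top of this idea, which is what makes it viable at the 50TB scale cited in the abstract.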

  20. Bayesian Inversion for Large Scale Antarctic Ice Sheet Flow

    KAUST Repository

    Ghattas, Omar


    The flow of ice from the interior of polar ice sheets is the primary contributor to projected sea level rise. One of the main difficulties faced in modeling ice sheet flow is the uncertain spatially-varying Robin boundary condition that describes the resistance to sliding at the base of the ice. Satellite observations of the surface ice flow velocity, along with a model of ice as a creeping incompressible shear-thinning fluid, can be used to infer this uncertain basal boundary condition. We cast this ill-posed inverse problem in the framework of Bayesian inference, which allows us to infer not only the basal sliding parameters, but also the associated uncertainty. To overcome the prohibitive cost of Bayesian methods for large-scale inverse problems, we exploit the fact that, despite the large size of observational data, they typically provide only sparse information on model parameters. We show results for Bayesian inversion of the basal sliding parameter field for the full Antarctic continent, and demonstrate that the work required to solve the inverse problem, measured in number of forward (and adjoint) ice sheet model solves, is independent of the parameter and data dimensions.

  1. Large-scale exploration and analysis of drug combinations. (United States)

    Li, Peng; Huang, Chao; Fu, Yingxue; Wang, Jinan; Wu, Ziyin; Ru, Jinlong; Zheng, Chunli; Guo, Zihu; Chen, Xuetong; Zhou, Wei; Zhang, Wenjuan; Li, Yan; Chen, Jianxin; Lu, Aiping; Wang, Yonghua


    Drug combinations are a promising strategy for combating complex diseases by improving efficacy and reducing corresponding side effects. Currently, a widely studied problem in pharmacology is to predict effective drug combinations, either through empirical screening in the clinic or purely experimental trials. However, the large-scale prediction of drug combinations by a systems method is rarely considered. We report a systems pharmacology framework to predict drug combinations (PreDC) based on a computational model, termed the probability ensemble approach (PEA), for analysis of both the efficacy and adverse effects of drug combinations. First, a Bayesian network integrated with a similarity algorithm is developed to model the combinations from drug molecular and pharmacological phenotypes, and the predictions are then assessed with both clinical efficacy and adverse effects. It is illustrated that PEA can predict the combination efficacy of drugs spanning different therapeutic classes with high specificity and sensitivity (AUC = 0.90), which was further validated by independent data or new experimental assays. PEA also evaluates the adverse effects (AUC = 0.95) quantitatively and detects the therapeutic indications for drug combinations. Finally, the PreDC database includes 1571 known and 3269 predicted optimal combinations as well as their potential side effects and therapeutic indications. The PreDC database is available at

  2. Interface-turbulence interactions in large-scale bubbling processes

    Energy Technology Data Exchange (ETDEWEB)

    Liovic, Petar [Institute of Energy Technology, ETH Zurich, and ASCOMP GmbH, Technoparkstrasse 1, CH-8005 Zurich (Switzerland); Lakehal, Djamel [Institute of Energy Technology, ETH Zurich, and ASCOMP GmbH, Technoparkstrasse 1, CH-8005 Zurich (Switzerland)]. E-mail:


    A novel large-eddy simulation (LES) approach for computation of incompressible multi-fluid flows is presented and applied to a turbulent bubbling process driven by the downward injection of air into a water pool at Re_pipe ≈ 17,000. Turbulence is found to assume its highest intensity in the bulk of the gas flow, and to decay as the interface of the growing bubble is approached. Shear flow prevails in the area of jetting from the pipe, buoyancy-driven flow prevails away from the jetting region, and a third region of vigorous bubble break-up lies O(10^0)-O(10^1) pipe diameters above the tip. Cascading of turbulent kinetic energy is accompanied by an instability-induced linear cascading of interface length scales (i.e. azimuthal modes), transferring energy from the most unstable mode to the smallest interface deformation scales. The LES shows the out-scatter of energy from the large-scale gas-side vortices down to interface wrinkling scales, and statistics prove the existence of a strong correlation between turbulence and interface deformations. Surface curvature was found to constitute a source of small-scale vorticity, and therefore of dissipation of turbulent kinetic energy.

  3. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael


    Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large-area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  4. Toward a quantitative description of large-scale neocortical dynamic function and EEG. (United States)

    Nunez, P L


    A general conceptual framework for large-scale neocortical dynamics based on data from many laboratories is applied to a variety of experimental designs, spatial scales, and brain states. Partly distinct, but interacting local processes (e.g., neural networks) arise from functional segregation. Global processes arise from functional integration and can facilitate (top down) synchronous activity in remote cell groups that function simultaneously at several different spatial scales. Simultaneous local processes may help drive (bottom up) macroscopic global dynamics observed with electroencephalography (EEG) or magnetoencephalography (MEG). A local/global dynamic theory that is consistent with EEG data and the proposed conceptual framework is outlined. This theory is neutral about properties of neural networks embedded in macroscopic fields, but its global component makes several qualitative and semiquantitative predictions about EEG measures of traveling and standing wave phenomena. A more general "metatheory" suggests what large-scale quantitative theories of neocortical dynamics may be like when more accurate treatment of local and nonlinear effects is achieved. The theory describes the dynamics of excitatory and inhibitory synaptic action fields. EEG and MEG provide large-scale estimates of modulation of these synaptic fields around background levels. Brain states are determined by neuromodulatory control parameters. Purely local states are dominated by local feedback gains and rise and decay times of postsynaptic potentials. Dominant local frequencies vary with brain region. Other states are purely global, with moderate to high coherence over large distances. Multiple global mode frequencies arise from a combination of delays in corticocortical axons and neocortical boundary conditions. Global frequencies are identical in all cortical regions, but most states involve dynamic interactions between local networks and the global system. 
EEG frequencies may involve a

  5. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities

    Directory of Open Access Journals (Sweden)

    Valerio Santangelo


    Full Text Available Higher-order cognitive processes have been shown to rely on the interplay between large-scale neural networks. However, the brain networks involved in the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of the lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality also highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among

  6. The role of break-induced replication in large-scale expansions of (CAG)n/(CTG)n repeats. (United States)

    Kim, Jane C; Harris, Samantha T; Dinter, Teresa; Shah, Kartik A; Mirkin, Sergei M


    Expansions of (CAG)n/(CTG)n trinucleotide repeats are responsible for over a dozen neuromuscular and neurodegenerative disorders. Large-scale expansions are commonly observed in human pedigrees and may be explained by iterative small-scale events such as strand slippage during replication or repair DNA synthesis. Alternatively, a distinct mechanism may lead to a large-scale repeat expansion as a single step. To distinguish between these possibilities, we developed a novel experimental system specifically tuned to analyze large-scale expansions of (CAG)n/(CTG)n repeats in Saccharomyces cerevisiae. The median size of repeat expansions was ∼60 triplets, although we also observed additions of more than 150 triplets. Genetic analysis revealed that Rad51, Rad52, Mre11, Pol32, Pif1, and Mus81 and/or Yen1 proteins are required for large-scale expansions, whereas proteins previously implicated in small-scale expansions are not involved. From these results, we propose a new model for large-scale expansions, which is based on the recovery of replication forks broken at (CAG)n/(CTG)n repeats via break-induced replication.

  7. Large-scale CO2 storage — Is it feasible?

    Directory of Open Access Journals (Sweden)

    Johansen H.


    Full Text Available CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, InSalah, Snøhvit). The large-scale storage challenge (several Gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of making international regulations. The storage challenge may be split into 4 main parts: 1) finding reservoirs with adequate storage capacity, 2) making sure that the sealing capacity above the reservoir is sufficient, 3) building the infrastructure for transport, drilling and injection, and 4) setting up and performing the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is however different in three important aspects: 1) the storage activity results in pressure increase in the subsurface, 2) there is no production of fluids that give important feedback on reservoir performance, and 3) the monitoring requirement will have to extend for a much longer time into the future than what is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may impose great damage on both man-made and natural materials if proper precautions are not taken. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples

  8. Large-scale CO2 storage — Is it feasible? (United States)

    Johansen, H.


    CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, InSalah, Snøhvit). The large-scale storage challenge (several Gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of making international regulations. The storage challenge may be split into 4 main parts: 1) finding reservoirs with adequate storage capacity, 2) making sure that the sealing capacity above the reservoir is sufficient, 3) building the infrastructure for transport, drilling and injection, and 4) setting up and performing the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is however different in three important aspects: 1) the storage activity results in pressure increase in the subsurface, 2) there is no production of fluids that give important feedback on reservoir performance, and 3) the monitoring requirement will have to extend for a much longer time into the future than what is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may impose great damage on both man-made and natural materials if proper precautions are not taken. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals.
The reduced opportunity for direct monitoring of fluid samples close to the
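    To get a feel for the "several Gigatons of CO2 per year" figure in the abstract, a quick unit conversion gives the pore-space volume required annually. The CO2 density used is an assumed round value for supercritical conditions, for illustration only:

```python
# Scale check: what does "several gigatons of CO2 per year" mean as a
# subsurface volume?  rho_co2 is an assumed round value (~700 kg/m^3
# for supercritical CO2 at reservoir conditions), not from the paper.
GT = 1e12                 # kilograms per gigaton
rho_co2 = 700.0           # assumed supercritical CO2 density [kg/m^3]

annual_mass = 3 * GT                       # take "several" to be 3 Gt/yr
annual_volume = annual_mass / rho_co2      # [m^3/yr]
print(f"{annual_volume / 1e9:.1f} km^3 of pore space needed per year")  # -> 4.3 km^3
```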

  9. Unified Access Architecture for Large-Scale Scientific Datasets (United States)

    Karna, Risav


    Data-intensive sciences have to deploy diverse large-scale database technologies for data analytics, as scientists are now dealing with much larger data volumes than ever before. While array databases have bridged many gaps between the needs of data-intensive research fields and DBMS technologies (Zhang 2011), the invocation of other big data tools accompanying these databases is still manual and separate from the database management interface. We identify this as an architectural challenge that will increasingly complicate the user's workflow owing to the growing number of useful but isolated and niche database tools. Such use of data analysis tools in effect leaves the burden on the user to synchronize the results from other data manipulation and analysis tools with the database management system. To this end, we propose a unified access interface for using big data tools within a large-scale scientific array database, using the database queries themselves to embed foreign routines belonging to the big data tools. Such an invocation of foreign data manipulation routines inside a query can be made possible through a user-defined function (UDF). UDFs that allow such levels of freedom as to call modules from another language and interface back and forth between the query body and the side-loaded functions would be needed for this purpose. For the purpose of this research we attempt the coupling of four widely used tools, Hadoop (hadoop1), Matlab (matlab1), R (r1) and ScaLAPACK (scalapack1), with the UDF feature of rasdaman (Baumann 98), an array-based data manager, to investigate this concept. The native array data model used by an array-based data manager provides compact data storage and high-performance operations on ordered data such as spatial data, temporal data, and matrix-based data for linear algebra operations (scidbusr1). Performance issues arising due to the coupling of tools with different paradigms, niche functionalities, separate processes and output
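    The UDF-dispatch pattern described above can be illustrated abstractly. The sketch below is purely conceptual: the registry, the query syntax, and the `r.mean` routine are invented for illustration and do not reflect rasdaman's actual rasql/UDF interface:

```python
# Conceptual sketch: a query string carries an embedded foreign-routine
# call, and a UDF registry dispatches it to the external tool.
import re

UDF_REGISTRY = {}

def udf(name):
    """Register a callable as a foreign routine under a query-visible name."""
    def wrap(fn):
        UDF_REGISTRY[name] = fn
        return fn
    return wrap

@udf("r.mean")          # stand-in for a routine handed off to, e.g., R
def r_mean(values):
    return sum(values) / len(values)

def run_query(query, array):
    """Tiny interpreter for queries of the form: SELECT <udf>(a) FROM <array>."""
    m = re.match(r"SELECT\s+(\S+)\(a\)\s+FROM\s+\w+", query)
    if not m or m.group(1) not in UDF_REGISTRY:
        raise ValueError("unknown UDF or malformed query")
    return UDF_REGISTRY[m.group(1)](array)

print(run_query("SELECT r.mean(a) FROM sst_array", [4.0, 6.0, 8.0]))  # -> 6.0
```

    The point of the design is that the query engine, not the user, synchronizes the foreign routine's result with the database's own data flow.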

  10. North Atlantic explosive cyclones and large scale atmospheric variability modes (United States)

    Liberato, Margarida L. R.


    Extreme windstorms are one of the major natural catastrophes in the extratropics, one of the most costly natural hazards in Europe, and are responsible for substantial economic damages and even fatalities. During the last decades Europe witnessed major damage from winter storms such as Lothar (December 1999), Kyrill (January 2007), Klaus (January 2009), Xynthia (February 2010), Gong (January 2013) and Stephanie (February 2014), which exhibited uncommon characteristics. In fact, most of these storms crossed the Atlantic in the direction of Europe, experiencing an explosive development at unusually low latitudes along the edge of the dominant North Atlantic storm track and reaching Iberia with an uncommon intensity (Liberato et al., 2011; 2013; Liberato 2014). Results show that the explosive cyclogenesis process of most of these storms at such low latitudes is driven by: (i) the southerly displacement of a very strong polar jet stream; and (ii) the presence of an atmospheric river (AR), that is, by a (sub)tropical moisture export over the western and central (sub)tropical Atlantic which converges into the cyclogenesis region and then moves along with the storm towards Iberia. Previous studies have pointed to a link between the North Atlantic Oscillation (NAO) and intense European windstorms. On the other hand, the NAO exerts a decisive control on the average latitudinal location of the jet stream over the North Atlantic basin (Woollings et al. 2010). In this work the link between North Atlantic explosive cyclogenesis, atmospheric rivers and large-scale atmospheric variability modes is reviewed and discussed. Liberato MLR (2014) The 19 January 2013 windstorm over the north Atlantic: Large-scale dynamics and impacts on Iberia. Weather and Climate Extremes, 5-6, 16-28. doi: 10.1016/j.wace.2014.06.002 Liberato MRL, Pinto JG, Trigo IF, Trigo RM. (2011) Klaus - an exceptional winter storm over Northern Iberia and Southern France. Weather 66:330-334.
doi:10.1002/wea.755 Liberato

  11. Talking About The Smokes: a large-scale, community-based participatory research project. (United States)

    Couzos, Sophia; Nicholson, Anna K; Hunt, Jennifer M; Davey, Maureen E; May, Josephine K; Bennet, Pele T; Westphal, Darren W; Thomas, David P


    To describe the Talking About The Smokes (TATS) project according to the World Health Organization guiding principles for conducting community-based participatory research (PR) involving indigenous peoples, in order to assist others planning large-scale PR projects. The TATS project was initiated in Australia in 2010 as part of the International Tobacco Control Policy Evaluation Project, and surveyed a representative sample of 2522 Aboriginal and Torres Strait Islander adults to assess the impact of tobacco control policies. The PR process of the TATS project, which aimed to build partnerships to create equitable conditions for knowledge production, was mapped and summarised onto a framework adapted from the WHO principles. The processes described cover consultation and approval, partnerships and research agreements, communication, funding, ethics and consent, and the data and benefits of the research. The TATS project involved baseline and follow-up surveys conducted in 34 Aboriginal community-controlled health services and one Torres Strait community. Consistent with the WHO PR principles, the TATS project built on community priorities and strengths through strategic partnerships from project inception, and demonstrated the value of research agreements and trusting relationships to foster shared decision making, capacity building and a commitment to Indigenous data ownership. Community-based PR methodology, by definition, needs adaptation to local settings and priorities. The TATS project demonstrates that large-scale research can be participatory, with strong Indigenous community engagement and benefits.

  12. Large-Scale Extratropical Weather Systems in Mars' Atmosphere (United States)

    Hollingsworth, Jeffery L.


    During late autumn through early spring, extratropical regions on Mars exhibit profound mean zonal equator-to-pole thermal contrasts. The imposition of this strong meridional temperature variation supports intense eastward-traveling, synoptic weather systems (i.e., transient baroclinic/barotropic waves) within Mars' extratropical atmosphere. Such disturbances grow, mature and decay within the east-west varying seasonal-mean midlatitude jet stream (i.e., the polar vortex) on the planet. Near the surface, the weather disturbances are marked by large-scale spiraling "comma"-shaped dust cloud structures and scimitar-shaped dust fronts, indicative of processes associated with cyclo-/fronto-genesis. The weather systems occur during specific seasons on Mars, and in both hemispheres. The northern hemisphere (NH) disturbances are significantly more intense than their counterparts in the southern hemisphere (SH). Further, the NH weather systems and accompanying frontal waves appear to have significant impacts on the transport of tracer fields (particularly dust, and to some extent water species (vapor/ice) as well). And regarding dust, frontal waves appear to be key agents in the lifting, lofting, organization and transport of this particular atmospheric aerosol. In this paper, a brief background and supporting observations of Mars' extratropical weather systems are presented. This is followed by a short review of the theory and various modeling studies, ranging from highly simplified and mechanistic investigations to full global circulation modeling. Finally, a discussion of outstanding issues and questions regarding the character and nature of Mars' extratropical traveling weather systems is offered.

  13. CLAST: CUDA implemented large-scale alignment search tool. (United States)

    Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken


    Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing

  14. Interpreting large-scale redshift-space distortion measurements (United States)

    Samushia, L.; Percival, W. J.; Raccanelli, A.


    The simplest theory describing large-scale redshift-space distortions (RSD), based on linear theory and distant galaxies, depends on the growth of cosmological structure, suggesting that strong tests of general relativity can be constructed from galaxy surveys. As data sets become larger and the expected constraints more precise, the extent to which the RSD follow the simple theory needs to be assessed in order that we do not introduce systematic errors into the tests by introducing inaccurate simplifying assumptions. We study the impact of the sample geometry, non-linear processes and biases induced by our lack of understanding of the radial galaxy distribution on RSD measurements. Using mock catalogues from the Large Suite of Dark Matter Simulations matched to the Sloan Digital Sky Survey II (SDSS-II) luminous red galaxy data, these effects are shown to be important at the level of 20 per cent. Including them, we can accurately model the recovered clustering in these mock catalogues on scales 30-200 h-1 Mpc. Applying this analysis to robustly measure parameters describing the growth history of the Universe from the SDSS-II data gives f(z= 0.25)σ8(z= 0.25) = 0.3512 ± 0.0583 and f(z= 0.37)σ8(z= 0.37) = 0.4602 ± 0.0378 when no prior is imposed on the growth rate, and the background geometry is assumed to follow a Λ cold dark matter (ΛCDM) model with the Wilkinson Microwave Anisotropy Probe (WMAP)+Type Ia supernova priors. The standard WMAP constrained ΛCDM model with general relativity predicts f(z= 0.25)σ8(z= 0.25) = 0.4260 ± 0.0141 and f(z= 0.37)σ8(z= 0.37) = 0.4367 ± 0.0136, which is fully consistent with these measurements.
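    The quoted ΛCDM predictions can be sanity-checked with the standard growth-factor integral, using f(z) ≈ Ω_m(z)^0.55 and σ8(z) = σ8 · D(z)/D(1). The sketch below assumes round WMAP-like parameters (Ω_m = 0.27, σ8 = 0.8), not the paper's exact priors, and lands within a few per cent of the quoted predictions:

```python
# Consistency check of the quoted LCDM predictions for f(z)*sigma8(z).
from math import sqrt

om, s8 = 0.27, 0.8          # assumed Omega_m and sigma_8 today (round values)

def E(a):                   # dimensionless Hubble rate, flat LCDM
    return sqrt(om / a**3 + (1.0 - om))

def growth(a, steps=20000):
    """Linear growth factor D(a) via the standard integral solution;
    normalized so that D ~ a deep in matter domination."""
    h = a / steps
    s = 0.0
    for i in range(steps):
        x = (i + 0.5) * h          # midpoint rule
        s += h / (x * E(x)) ** 3
    return 2.5 * om * E(a) * s

def f_sigma8(z):
    a = 1.0 / (1.0 + z)
    om_z = om / (a**3 * E(a)**2)   # Omega_m(z)
    f = om_z**0.55                 # standard growth-rate approximation
    return f * s8 * growth(a) / growth(1.0)

print(f"f*sigma8(0.25) = {f_sigma8(0.25):.3f}")
print(f"f*sigma8(0.37) = {f_sigma8(0.37):.3f}")
```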

  15. BFAST: an alignment tool for large scale genome resequencing.

    Directory of Open Access Journals (Sweden)

    Nils Homer


    Full Text Available The new generation of massively parallel DNA sequencers, combined with the challenge of whole human genome resequencing, results in the need for rapid and accurate alignment of billions of short DNA sequence reads to a large reference genome. Speed is obviously of great importance, but equally important is maintaining alignment accuracy of short reads, in the 25-100 base range, in the presence of errors and true biological variation. We introduce a new algorithm specifically optimized for this task, as well as a freely available implementation, BFAST, which can align data produced by any of the current sequencing platforms, allows for user-customizable levels of speed and accuracy, supports paired-end data, and provides for efficient parallel and multi-threaded computation on a computer cluster. The new method is based on creating flexible, efficient whole-genome indexes to rapidly map reads to candidate alignment locations, with arbitrary multiple independent indexes allowed to achieve robustness against read errors and sequence variants. The final local alignment uses a Smith-Waterman method, with gaps to support the detection of small indels. We compare BFAST to a selection of large-scale alignment tools -- BLAT, MAQ, SHRiMP, and SOAP -- in terms of both speed and accuracy, using simulated and real-world datasets. We show BFAST can achieve substantially greater sensitivity of alignment in the context of errors and true variants, especially insertions and deletions, and minimize false mappings, while maintaining adequate speed compared to other current methods. We show BFAST can align the amount of data needed to fully resequence a human genome, one billion reads, with high sensitivity and accuracy, on a modest computer cluster in less than 24 hours. BFAST is available at (
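    The final stage named in the abstract is a Smith-Waterman local alignment. A minimal score-only sketch of that algorithm follows (linear gap penalty for brevity; the scoring values are illustrative, not BFAST's defaults):

```python
# Minimal Smith-Waterman local alignment (score only, linear gaps).
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]   # DP score matrix
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores are clamped at zero.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

# A read with a single mismatch still aligns well locally:
print(smith_waterman("ACGTTGCA", "ACGATGCA"))   # -> 13 (7 matches, 1 mismatch)
```

    BFAST runs this step only on the candidate locations returned by its indexes, which is what keeps the quadratic-cost dynamic program affordable at genome scale.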

  16. Large-scale HTS bulks for magnetic application

    Energy Technology Data Exchange (ETDEWEB)

    Werfel, Frank N., E-mail: [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany); Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany)


    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks pave the way for large-scale application. ► Levitation platforms demonstrate “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured multi-seeded large YBCO bulks with three to eight seeds. Besides levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN{sub 2} and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN{sub 2} allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested, and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by group-assembling the HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track are achieved.
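    The quoted levitation figures imply a bound on the cryostat's own mass, a one-line calculation (the mid-range force value below is my interpolation of the quoted 2500-3000 N):

```python
# Arithmetic behind the quoted figures: a levitation force of ~2750 N
# with a load-to-weight ratio above 15 bounds the cryostat mass.
g = 9.81                      # gravitational acceleration [m/s^2]
force = 2750.0                # mid-range levitation force per cryostat [N]
ratio = 15.0                  # quoted load-to-weight ratio (lower bound)

levitated_mass = force / g                  # payload per cryostat [kg]
max_cryostat_mass = levitated_mass / ratio  # implied cryostat mass bound
print(f"levitated mass  ~ {levitated_mass:.0f} kg per cryostat")   # ~ 280 kg
print(f"cryostat weighs < {max_cryostat_mass:.1f} kg")             # < 18.7 kg
```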

  17. Large-scale Contextual Effects in Early Human Visual Cortex

    Directory of Open Access Journals (Sweden)

    Sung Jun Joo


    Full Text Available A commonly held view about neurons in early visual cortex is that they serve as localized feature detectors. Here, however, we demonstrate that the responses of neurons in early visual cortex are sensitive to global visual patterns. Using multiple methodologies (psychophysics, fMRI, and EEG), we measured neural responses to an oriented Gabor (“target”) embedded in various orientation patterns. Specifically, we varied whether a central target deviated from its context by changing distant orientations while leaving the immediately neighboring flankers unchanged. The results of psychophysical contrast adaptation and fMRI experiments show that a target that deviates from its context results in more neural activity than a target that is grouped into an alternating pattern. For example, the neural response to a vertically oriented target was greater when it deviated from the orientation of the flankers (HHVHH) than when it was grouped into an alternating pattern (VHVHV). We then found that this pattern-sensitive response manifests in the earliest sensory component of the event-related potential to the target. Finally, in a forced-choice classification task of “noise” stimuli, perceptions are biased to “see” an orientation that deviates from its context. Our results show that neurons in early visual cortex are sensitive to large-scale global patterns in images in a way that is more sophisticated than localized feature detection. The reduced neural response to statistical redundancies in images is not only optimal from an information-theory perspective but also consistent with known energy constraints in neural processing.

  18. Local Large-Scale Structure and the Assumption of Homogeneity (United States)

    Keenan, Ryan C.; Barger, Amy J.; Cowie, Lennox L.


    Our recent estimates of galaxy counts and the luminosity density in the near-infrared (Keenan et al. 2010, 2012) indicated that the local universe may be under-dense on radial scales of several hundred megaparsecs. Such a large-scale local under-density could introduce significant biases into the measurement and interpretation of cosmological observables, such as the inferred effects of dark energy on the rate of expansion. In Keenan et al. (2013), we measured the K-band luminosity density as a function of distance from us to test for such a local under-density. We made this measurement over the redshift range 0.01 < z < 0.2; beyond z ~ 0.07, we measure an increasing luminosity density that by z ~ 0.1 rises to a value ~1.5 times higher than that measured locally. This implies that the stellar mass density follows a similar trend. Assuming that the underlying dark matter distribution is traced by this luminous matter, this suggests that the local mass density may be lower than the global mass density of the universe, at an amplitude and on a scale sufficient to introduce significant biases into the measurement of basic cosmological observables. At least one study has shown that an under-density of roughly this amplitude and scale could resolve the apparent tension between direct local measurements of the Hubble constant and those inferred by the Planck team. Other theoretical studies have concluded that such an under-density could account for what looks like an accelerating expansion, even when no dark energy is present.

  19. Cosmological parameters from large scale structure - geometric versus shape information

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Jan; Hannestad, Steen [Department of Physics and Astronomy, University of Aarhus, DK-8000 Århus C (Denmark); Lesgourgues, Julien [CERN, Theory Division, CH-1211 Geneva 23 (Switzerland); Rampf, Cornelius; Wong, Yvonne Y.Y. [Institut für Theoretische Teilchenphysik und Kosmologie, RWTH Aachen, D-52056 Aachen (Germany)]


    The matter power spectrum as derived from large scale structure (LSS) surveys contains two important and distinct pieces of information: an overall smooth shape and the imprint of baryon acoustic oscillations (BAO). We investigate the separate impact of these two types of information on cosmological parameter estimation for current data, and show that for the simplest cosmological models, the broad-band shape information currently contained in the SDSS DR7 halo power spectrum (HPS) is by far superseded by geometric information derived from the baryonic features. An immediate corollary is that, contrary to popular belief, the upper limit on the neutrino mass m{sub ν} presently derived from LSS combined with cosmic microwave background (CMB) data does not in fact arise from the possible small-scale power suppression due to neutrino free-streaming, if we limit the model framework to minimal ΛCDM+m{sub ν}. However, in more complicated models, such as those extended with extra light degrees of freedom and a dark energy equation of state parameter w differing from -1, shape information becomes crucial for the resolution of parameter degeneracies. This conclusion will remain true even when data from the Planck spacecraft are combined with SDSS DR7 data. In the course of our analysis, we update the BAO likelihood function by including an exact numerical calculation of the time of decoupling, and the HPS likelihood by introducing a new dewiggling procedure that generalises the previous approach to models with an arbitrary sound horizon at decoupling. These changes allow a consistent application of the BAO and HPS data sets to a much wider class of models, including the ones considered in this work. All the cases considered here are compatible with the conservative 95% bounds Σm{sub ν} < 1.16 eV, N{sub eff} = 4.8 ± 2.0.

  20. Wireless gigabit data telemetry for large-scale neural recording. (United States)

    Kuan, Yen-Cheng; Lo, Yi-Kai; Kim, Yanghyo; Chang, Mau-Chung Frank; Liu, Wentai


    Implantable wireless neural recording from a large ensemble of simultaneously active neurons is a critical component for thoroughly investigating neural interactions and brain dynamics in freely moving animals. Recent research has demonstrated the feasibility of simultaneously recording from hundreds of neurons and suggested that the ability to record from a larger number of neurons yields better signal quality. Such massive recording inevitably demands a large amount of data transfer. For example, recording 2000 neurons while preserving signal fidelity (> 12 bit, > 40 kS/s per neuron) requires approximately a 1-Gb/s data link. Designing a wireless data telemetry system that supports such (or higher) data rates while minimizing the power consumption of an implantable device poses a grand challenge for the neuroscience community. In this paper, we present a wireless gigabit data telemetry for future large-scale neural recording interfaces. The telemetry comprises a pair of low-power gigabit transmitters and receivers operating at 60 GHz, and establishes a short-distance wireless link to transfer the massive amount of neural signals outward from the implanted device. The transmission distance of the received neural signal can be further extended by an external rendezvous wireless transceiver, which is less power- and heat-constrained since it is not in the immediate proximity of the cortex and its radiated signal is not severely attenuated by the lossy tissue. The gigabit data link has been demonstrated to achieve a high data rate of 6 Gb/s with a bit error rate of 10⁻¹² at a transmission distance of 6 mm, an applicable separation between transmitter and receiver. This high data rate can support thousands of recording channels while ensuring a low energy cost per bit of 2.08 pJ/b.
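A quick back-of-envelope check of the link budget quoted above (all figures are taken from the abstract itself):

```python
# Sanity-check the data-rate and energy figures quoted in the abstract.
channels = 2000          # simultaneously recorded neurons
bits_per_sample = 12     # minimum resolution per sample (> 12 bit)
sample_rate = 40_000     # minimum rate per neuron (> 40 kS/s)

raw_rate_bps = channels * bits_per_sample * sample_rate
print(f"raw data rate: {raw_rate_bps / 1e9:.2f} Gb/s")     # 0.96, i.e. ~1 Gb/s

# Power implied by the demonstrated link: 6 Gb/s at 2.08 pJ/b.
link_rate_bps = 6e9
energy_per_bit_j = 2.08e-12
link_power_w = link_rate_bps * energy_per_bit_j
print(f"implied link power: {link_power_w * 1e3:.1f} mW")  # ~12.5 mW
```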

  1. Dominant modes of variability in large-scale Birkeland currents (United States)

    Cousins, E. D. P.; Matsuo, Tomoko; Richmond, A. D.; Anderson, B. J.


    Properties of variability in large-scale Birkeland currents are investigated through empirical orthogonal function (EOF) analysis of 1 week of data from the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE). Mean distributions and dominant modes of variability are identified for both the Northern and Southern Hemispheres. Differences in the results from the two hemispheres are observed, which are attributed to seasonal differences in conductivity (the study period occurred near solstice). A universal mean and set of dominant modes of variability are obtained by combining the hemispheric results, and it is found that the mean and first three modes of variability (EOFs) account for 38% of the total observed squared magnetic perturbations (δB²) from both hemispheres. The mean distribution represents a standard Region 1/Region 2 (R1/R2) morphology of currents. EOF 1 captures the strengthening/weakening of the average distribution and is well correlated with the north-south component of the interplanetary magnetic field (IMF). EOF 2 captures a mixture of effects, including the expansion/contraction and rotation of the R1/R2 currents; this mode correlates only weakly with possible external driving parameters. EOF 3 captures changes in the morphology of the currents in the dayside cusp region and is well correlated with the dawn-dusk component of the IMF. The higher-order EOFs capture more complex, smaller-scale variations in the Birkeland currents and appear generally uncorrelated with external driving parameters. The results of the EOF analysis described here are used for describing error covariance in a data assimilation procedure utilizing AMPERE data, as described in a companion paper.
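Computationally, EOF analysis of the kind described here is a principal component decomposition of the space-time data matrix. A minimal sketch on synthetic data (the AMPERE processing chain is not reproduced; the toy field below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for observations: 200 time samples on 50 grid points,
# built from two spatial patterns with time-varying amplitudes plus noise.
t = np.linspace(0, 20, 200)
x = np.linspace(0, 1, 50)
field = (np.outer(np.sin(t), np.sin(np.pi * x))
         + 0.5 * np.outer(np.cos(3 * t), np.sin(2 * np.pi * x))
         + 0.1 * rng.standard_normal((200, 50)))

mean = field.mean(axis=0)                 # the "mean distribution"
U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)

eofs = Vt                                 # spatial modes: EOF 1 is Vt[0], ...
pcs = U * s                               # amplitude time series of each mode
var_frac = s**2 / np.sum(s**2)            # variance captured per mode
print("variance captured by first 3 EOFs:", round(var_frac[:3].sum(), 3))
```

The fraction of variance (here, of the analogue of δB²) explained by the leading modes falls directly out of the singular values.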

  2. Large-scale solar magnetic field mapping: I. (United States)

    Schatten, Kenneth H


    This article focuses on mapping the Sun's large-scale magnetic fields. In particular, the model considers how photospheric fields evolve in time. Our solar field mapping method uses NetLogo's cellular-automata software, in which agents carry out the movement of magnetic field on the Sun. The model's entities consist of two breeds: blue and red agents. The former carry a fixed amount of radially outward magnetic flux, 10²³ Mx, and the latter the identical amount of inward-directed flux. For clarity, individual agents are distinguished by various shades of blue and red arrows whose orientation indicates the direction the agents are moving relative to the near-steady bulk fluid motions. The fluid motions generally advect the field with the well-known meridional circulation and differential rotation. Our model predominantly focuses on spatial and temporal departures from the bulk fluid motions owing to magnetic interactions. Agents affect each other in only a few ways: i) while at the poles, field agents are connected via the Babcock-Leighton (B-L) subsurface field to other latitudes, allowing them to undertake two duties there: A) the B-L subsurface field spawns the next generation of magnetic field via new agents, and B) the B-L subsurface field attracts lower-latitude fields via "long-range" magnetic stress tension; ii) nearby agents affect each other's motion through short-range interactions; and iii) annihilation: when opposite-polarity agents get too close to each other, they disappear in pairs. The behavior of the agents' long- and short-range magnetic forces is discussed within this paper, as is the model's use of internal boundary conditions. The workings of the model may be seen in the accompanying movies and/or by using the software available via the SpringerPlus website or the NetLogo community website, where help is readily available; should all these fail, some help is available from the author.
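The pairwise-annihilation rule for opposite-polarity flux agents can be illustrated with a toy one-dimensional random walk (an illustrative sketch only; the lattice size, agent counts and step rule are assumptions, not the NetLogo model's):

```python
import numpy as np

rng = np.random.default_rng(2)
N_CELLS, N_EACH, STEPS = 200, 50, 500

# Signed flux agents on a periodic 1-D lattice: +1 outward, -1 inward flux.
pos = rng.integers(0, N_CELLS, size=2 * N_EACH)
sign = np.r_[np.ones(N_EACH, dtype=int), -np.ones(N_EACH, dtype=int)]

for _ in range(STEPS):
    pos = (pos + rng.integers(-1, 2, size=pos.size)) % N_CELLS  # random walk
    # Annihilation: in each cell, remove agents in opposite-sign pairs.
    dead = []
    for cell in np.unique(pos):
        here = np.flatnonzero(pos == cell)
        plus, minus = here[sign[here] > 0], here[sign[here] < 0]
        k = min(len(plus), len(minus))
        dead += list(plus[:k]) + list(minus[:k])
    if dead:
        keep = np.setdiff1d(np.arange(pos.size), np.array(dead))
        pos, sign = pos[keep], sign[keep]

print(f"agents remaining: {pos.size}, net flux: {sign.sum()}")
```

Because agents are always removed in opposite-sign pairs, the net flux is conserved exactly, mirroring flux conservation in the full model.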

  3. A first large-scale flood inundation forecasting model (United States)

    Schumann, G. J.-P.; Neal, J. C.; Voisin, N.; Andreadis, K. M.; Pappenberger, F.; Phanthuwongpakdee, N.; Hall, A. C.; Bates, P. D.


    At present, continental- to global-scale flood forecasting predicts discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation variables are of primary interest, and all flood impacts are inherently local in nature. This paper proposes a large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas. The model was built for the Lower Zambezi River to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. ECMWF ensemble forecast (ENS) data were used to force the VIC (Variable Infiltration Capacity) hydrologic model, which simulated and routed daily flows to the input boundary locations of a 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of the channels that play a key role in flood wave propagation. We therefore employed a novel subgrid channel scheme to describe the river network in detail while representing the floodplain at an appropriate scale. The modeling system was calibrated using channel water levels from satellite laser altimetry and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within one to two model resolutions of the observed flood edge, and inundation area agreement was on average 86%. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km² and at model grid resolutions of up to several km².
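An "inundation area agreement" of the kind reported above can be scored with a simple overlap measure on boolean flood masks; the exact metric the authors used is not given here, so this critical-success-index style fit is an assumption:

```python
import numpy as np

def flood_fit(pred: np.ndarray, obs: np.ndarray) -> float:
    """Overlap score F = |A∩B| / |A∪B| for two boolean flood-extent masks."""
    pred, obs = pred.astype(bool), obs.astype(bool)
    inter = np.logical_and(pred, obs).sum()
    union = np.logical_or(pred, obs).sum()
    return inter / union if union else 1.0

# Toy 10x10 rasters: observed flood vs. a slightly under-predicted one.
obs = np.zeros((10, 10), dtype=bool); obs[2:8, 2:8] = True    # 36 wet cells
pred = np.zeros((10, 10), dtype=bool); pred[3:8, 2:8] = True  # 30 wet cells
print(f"agreement: {flood_fit(pred, obs):.2f}")  # 30/36 ≈ 0.83
```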

  4. Topological analysis of large-scale biomedical terminology structures. (United States)

    Bales, Michael E; Lussier, Yves A; Johnson, Stephen B


    To characterize the global structural features of large-scale biomedical terminologies using currently emerging statistical approaches. Given the rapid growth of terminologies, this research was designed to address scalability. We selected 16 terminologies covering a variety of domains from the UMLS Metathesaurus, a collection of terminological systems. Each was modeled as a network in which nodes were atomic concepts and links were relationships asserted by the source vocabulary. For comparison against each terminology we created three random networks of equivalent size and density. The measures computed were average node degree, node degree distribution, clustering coefficient, and average path length. Eight of the 16 terminologies exhibited the small-world characteristics of a short average path length and strong local clustering. An overlapping subset of nine exhibited a power-law distribution in node degrees, indicative of a scale-free architecture. We attribute these features to specific design constraints. Constraints on node connectivity, common in more synthetic classification systems, localize the effects of changes and deletions. In contrast, small-world and scale-free features, common in comprehensive medical terminologies, promote flexible navigation and less restrictive, organic-like growth. While thought of as synthetic, grid-like structures, some controlled terminologies are structurally indistinguishable from natural language networks. This paradoxical result suggests that terminology structure is shaped not only by formal logic-based semantics but also by rules analogous to those that govern social networks and biological systems. Graph-theoretic modeling shows early promise as a framework for describing terminology structure. A deeper understanding of these techniques may inform the development of scalable terminologies and ontologies.
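The small-world test used in such studies compares each graph's clustering and path length against size- and density-matched random graphs; a sketch with networkx (an illustrative synthetic graph stands in for the UMLS terminology networks):

```python
import networkx as nx

# Stand-in for a terminology graph: a Watts-Strogatz small-world network
# (high clustering, short paths) compared against an Erdős-Rényi random
# graph of equivalent size and density, as in the study's methodology.
G = nx.watts_strogatz_graph(n=400, k=8, p=0.05, seed=1)
R = nx.gnm_random_graph(n=400, m=G.number_of_edges(), seed=1)

def avg_path_length(H):
    """Average shortest path length on the largest connected component."""
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

C_g, C_r = nx.average_clustering(G), nx.average_clustering(R)
L_g, L_r = avg_path_length(G), avg_path_length(R)

# Small-world signature: C_g >> C_r while L_g stays close to L_r.
print(f"clustering:  {C_g:.3f} (graph) vs {C_r:.3f} (random)")
print(f"path length: {L_g:.2f} (graph) vs {L_r:.2f} (random)")
```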

  5. Key Technologies in Large-scale Rescue Robot Wrists

    Directory of Open Access Journals (Sweden)

    Tang Zhidong


    Full Text Available Taking the fully Automatic Quick Hitch Coupling Device (full-AQHCD) as the starting point, the key technologies in a large-scale rescue robot wrist, which integrates a quick hitch coupling device, a turning device, and a swaying device, are reviewed. Firstly, the domestically made semi-AQHCD for the main-Arm Claw Wrist (main-ACW) is introduced, and the full-AQHCD imported from the Oil Quick company in Sweden for the vice-Arm Cutter Wrist (vice-ACW) is presented. Secondly, three key technologies in the full-AQHCD are concisely described: rotary joint technology, automatic docking technology, and precise docking technology for quick-action coupling. Thirdly, the domestically made hydraulic-motor-driven gear-type slewing bearing technology of the turning device for the main-ACW is introduced, and the hydraulic-motor-driven worm-type slewing bearing technology of the turning device imported from the HKS company in Germany for the vice-ACW is presented; in particular, the gap in the comparable domestic technology is discussed. Subsequently, the domestically made hydraulic-cylinder-driven 4-bar linkage technology of the swaying device for the main-ACW is introduced, and the hydraulic double-spiral swing cylinder technology of the swaying device imported from the HKS company in Germany for the vice-ACW is presented; again, the gap in the comparable domestic technology is discussed. Finally, it is emphasized that these technological gaps seriously restrict the ability of the vice-ACW to work successfully in future rescue operations; they must therefore be given high priority in follow-up research and development (R&D) through cooperation with professional manufacturers in China, thereby making technological advances.

  6. Spatiotemporal dynamics of large-scale brain activity (United States)

    Neuman, Jeremy

    Understanding the dynamics of large-scale brain activity is a tough challenge. One reason for this is the presence of an incredible amount of complexity, arising from roughly 100 billion neurons connected via 100 trillion synapses. Because of the extremely high number of degrees of freedom in the nervous system, the question must be posed of how the brain manages to function properly and remain stable, yet also be adaptable. Neuroscientists have identified many ways the nervous system makes this possible, of which synaptic plasticity is possibly the most notable. On the other hand, it is vital to understand how the nervous system also loses stability, resulting in neuropathological diseases such as epilepsy, a disease which affects 1% of the population. In the following work, we seek to answer some of these questions from two different perspectives. The first uses mean-field theory applied to neuronal populations, where the variables of interest are the percentages of active excitatory and inhibitory neurons in a network, to consider how the nervous system responds to external stimuli, self-organizes and generates epileptiform activity. The second method uses statistical field theory, in the framework of single neurons on a lattice, to study the concept of criticality, an idea borrowed from physics which posits that in some regime the brain operates in a collectively stable or marginally stable manner. This will be examined in two different neuronal networks, with self-organized criticality serving as the overarching theme for the union of both perspectives. One of the biggest problems in neuroscience is the question of to what extent certain details are significant to the functioning of the brain. These details give rise to various spatiotemporal properties that at the smallest of scales explain the interaction of single neurons and synapses and at the largest of scales describe, for example, behaviors and sensations. In what follows, we will shed some
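The first perspective, mean-field equations for the active fractions of excitatory and inhibitory populations, is commonly written in Wilson-Cowan form; a minimal Euler-integration sketch (all parameter values are illustrative placeholders, not taken from this work):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Wilson-Cowan-type rate equations: E, I are the fractions of active
# excitatory/inhibitory neurons. Coupling weights and external drive P, Q
# are illustrative placeholders.
w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0
P, Q = 1.25, 0.0
tau_e, tau_i = 1.0, 2.0

E, I = 0.1, 0.05
dt, steps = 0.01, 5000
for _ in range(steps):
    dE = (-E + sigmoid(w_ee * E - w_ei * I + P)) / tau_e
    dI = (-I + sigmoid(w_ie * E - w_ii * I + Q)) / tau_i
    E += dt * dE
    I += dt * dI

print(f"population activity after integration: E={E:.3f}, I={I:.3f}")
```

Depending on the weights and drive, such a system settles to a fixed point or oscillates; runaway excitation in this picture is one caricature of epileptiform activity.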

  7. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)


    Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  8. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T


    Data produced by large-scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets, we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers, a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end-user requirements of the discovery process. Our work contrasts with existing research that applies wavelets to range querying, change detection, and clustering problems, in that we work directly with a decomposition of the data. The difference in procedure is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression, and show how queries are posed on the resulting compressed model. Results of this process are shown for several problems of interest, and we end with some observations and conclusions about this research.
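The idea of compressing via a wavelet decomposition and answering approximate queries directly on the compressed representation can be illustrated with a one-level Haar transform (a toy sketch; the paper's multi-resolution scheme and data are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "simulation field": a smooth positive signal sampled at 1024 points.
x = np.linspace(0, 4 * np.pi, 1024)
data = 2.0 + np.sin(x) + 0.05 * rng.standard_normal(1024)

# One-level orthonormal Haar transform: averages and details of adjacent pairs.
a = (data[0::2] + data[1::2]) / np.sqrt(2)
d = (data[0::2] - data[1::2]) / np.sqrt(2)

# "Compress" by dropping the detail band entirely (2:1 reduction); the
# reconstruction is then the pairwise mean, so pair-aligned range sums are
# preserved exactly and misaligned ones incur only a small boundary error.
recon = np.empty_like(data)
recon[0::2] = a / np.sqrt(2)
recon[1::2] = a / np.sqrt(2)

exact = data[101:600].sum()    # range-sum query on the raw data
approx = recon[101:600].sum()  # same query on the compressed model
print(f"relative range-sum error: {abs(exact - approx) / abs(exact):.2e}")
```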

  9. Media development for large scale Agrobacterium tumefaciens culture. (United States)

    Leth, Ingrid K; McDonald, Karen A


    A chemically defined medium was developed for growing Agrobacterium tumefaciens at large scale for commercial production of recombinant proteins by transient expression in plants. Design of experiments was used to identify major and secondary effects of ten media components: sucrose, ammonium sulfate ((NH₄)₂SO₄), magnesium sulfate heptahydrate (MgSO₄·7H₂O), calcium chloride dihydrate (CaCl₂·2H₂O), iron(II) sulfate heptahydrate (FeSO₄·7H₂O), manganese(II) sulfate monohydrate (MnSO₄·H₂O), zinc sulfate heptahydrate (ZnSO₄·7H₂O), sodium chloride (NaCl), potassium chloride (KCl) and a sodium/potassium phosphate buffer (Na₂HPO₄/KH₂PO₄). Calcium and zinc were found to have no detectable impact on biomass concentration or transient expression level, and the concentrations of the other components that maximized final biomass concentration were determined. The maximum specific growth rate of Agrobacterium strain C58C1 pTFS40 in this medium was 0.33 ± 0.01 h⁻¹, and the final biomass concentration after 26 h of batch growth in shake flasks was 2.6 g dry cell weight/L. Transient expression levels of the reporter protein GUS following infiltration of a recombinant Agrobacterium strain C58C1 into N. benthamiana were comparable when the strain was grown in the defined medium, Lysogeny Broth (LB) medium, or yeast extract-peptone (YEP) medium. In LB and YEP media, free amino acid concentrations were measured at three points over the course of batch growth of Agrobacterium strain C58C1 pTFS40; the results indicated that l-serine and l-asparagine were depleted from the media first, followed by l-alanine and l-glutamic acid. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1218-1225, 2017. © 2017 American Institute of Chemical Engineers.
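Main-effect screening in a two-level factorial design, the kind of analysis the abstract's design-of-experiments approach relies on, can be sketched as follows (three illustrative factors and a synthetic biomass response; the paper's ten-component design and data are not reproduced):

```python
from itertools import product

import numpy as np

factors = ["sucrose", "ammonium_sulfate", "phosphate_buffer"]
design = np.array(list(product([-1, 1], repeat=3)))  # 2^3 full factorial

rng = np.random.default_rng(1)
# Synthetic response: sucrose and phosphate matter most (coefficients assumed).
biomass = (2.0 + 0.6 * design[:, 0] + 0.1 * design[:, 1]
           + 0.4 * design[:, 2] + 0.05 * rng.standard_normal(8))

# Main effect of each factor: mean response at high level minus at low level.
effects = {}
for j, name in enumerate(factors):
    effects[name] = (biomass[design[:, j] == 1].mean()
                     - biomass[design[:, j] == -1].mean())
    print(f"{name:18s} main effect: {effects[name]:+.2f}")
```

Factors whose main effects fall within the noise band (here, the ammonium term) are the ones a screening design would flag as having no detectable impact.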

  10. Large-scale Health Information Database and Privacy Protection. (United States)

    Yamamoto, Ryuichi


    Japan was once progressive in the digitalization of its healthcare fields but has unfortunately fallen behind in the secondary use of data for the public interest. There has recently been a trend toward establishing large-scale health databases in the nation, and as this trend has progressed, a conflict between data use for the public interest and privacy protection has surfaced. Databases for health insurance claims or for specific health checkups and guidance services were created under the law that aims to ensure healthcare for the elderly; however, the act makes no mention of using these databases for the public interest in general. Thus, an initiative for such use must proceed carefully and attentively. The PMDA projects that collect a large amount of medical record information from large hospitals, and the health database development project that the Ministry of Health, Labour and Welfare (MHLW) is working on, will soon begin to operate according to a general consensus; however, the validity of this consensus can be questioned if issues of anonymity arise. The likelihood that researchers conducting a study for the public interest would intentionally invade the privacy of their subjects is slim. However, patients could develop a sense of distrust about how their data are used, since the legal requirements are ambiguous. Nevertheless, without the use of patients' medical records for the public interest, progress in medicine will grind to a halt. Proper legislation that is clear for both researchers and patients is therefore highly desirable. A revision of the Act on the Protection of Personal Information is currently in progress. In reality, however, privacy is not something that laws alone can protect; it will also require guidelines and self-discipline. We now live in an age of information capitalization. I will introduce the trends in legal reform regarding healthcare information and discuss some basics to help people properly face the issue of health big data and privacy.

  11. Inflationary tensor fossils in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Dimastrogiovanni, Emanuela [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Fasiello, Matteo [Department of Physics, Case Western Reserve University, Cleveland, OH 44106 (United States); Jeong, Donghui [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Kamionkowski, Marc [Department of Physics and Astronomy, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218 (United States)


    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and the tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated, whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator directly probes relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure, but this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale corresponding to the transition from the non-attractor to the attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossil signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  12. 77 FR 58416 - Large Scale Networking (LSN); Middleware and Grid Interagency Coordination (MAGIC) Team (United States)


    ... Large Scale Networking (LSN); Middleware and Grid Interagency Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO... to the Large Scale Networking (LSN) Coordinating Group (CG). Public Comments: The government seeks...

  13. 78 FR 7464 - Large Scale Networking (LSN)-Middleware And Grid Interagency Coordination (MAGIC) Team (United States)


    ... Large Scale Networking (LSN)--Middleware And Grid Interagency Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO... Team reports to the Large Scale Networking (LSN) Coordinating Group (CG). Public Comments: The...


    Energy Technology Data Exchange (ETDEWEB)

    Michael J. Holmes; Steven A. Benson; Jeffrey S. Thompson


    The Energy & Environmental Research Center (EERC) is conducting a consortium-based effort directed toward resolving the mercury (Hg) control issues facing the lignite industry. Specifically, the EERC team--the EERC, EPRI, URS, ADA-ES, Babcock & Wilcox, the North Dakota Industrial Commission, SaskPower, and the Mercury Task Force, which includes Basin Electric Power Cooperative, Otter Tail Power Company, Great River Energy, Texas Utilities (TXU), Montana-Dakota Utilities Co., Minnkota Power Cooperative, BNI Coal Ltd., Dakota Westmoreland Corporation, and the North American Coal Company--has undertaken a project to significantly and cost-effectively oxidize elemental mercury in lignite combustion gases, followed by capture in a wet scrubber. This approach will be applicable to virtually every lignite utility in the United States and Canada and potentially impact subbituminous utilities. The oxidation process is proven at the pilot-scale and in short-term full-scale tests. Additional optimization is continuing on oxidation technologies, and this project focuses on longer-term full-scale testing. The lignite industry has been proactive in advancing the understanding of and identifying control options for Hg in lignite combustion flue gases. Approximately 1 year ago, the EERC and EPRI began a series of Hg-related discussions with the Mercury Task Force as well as utilities firing Texas and Saskatchewan lignites. This project is one of three being undertaken by the consortium to perform large-scale Hg control technology testing to address the specific needs and challenges to be met in controlling Hg from lignite-fired power plants. This project involves Hg oxidation upstream of a system equipped with an electrostatic precipitator (ESP) followed by wet flue gas desulfurization (FGD). The team involved in conducting the technical aspects of the project includes the EERC, Babcock & Wilcox, URS, and ADA-ES. The host sites include Minnkota Power Cooperative Milton R. Young

  15. A cooperative strategy for parameter estimation in large scale systems biology models. (United States)

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R


    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and
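The cooperation mechanism, parallel metaheuristic threads periodically sharing their incumbent best solution, can be sketched with plain Python threads; the eSS internals are replaced here by a simple random local search on a test function (everything below is a hedged toy, not the CeSS implementation):

```python
import threading

import numpy as np

def rosenbrock(x):
    """Classic valley-shaped test function; global minimum 0 at all-ones."""
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

class Incumbent:
    """Best-known solution shared between cooperating threads."""
    def __init__(self, x0):
        self.lock = threading.Lock()
        self.x, self.f = x0.copy(), rosenbrock(x0)

    def update(self, x, f):
        with self.lock:
            if f < self.f:
                self.x, self.f = x.copy(), f

def worker(shared, seed, iters=3000):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, size=4)
    fx = rosenbrock(x)
    for i in range(iters):
        if i % 200 == 0:                  # cooperation: pull shared incumbent
            with shared.lock:
                if shared.f < fx:
                    x, fx = shared.x.copy(), shared.f
        cand = x + rng.normal(scale=0.1, size=4)
        fc = rosenbrock(cand)
        if fc < fx:                       # greedy local move
            x, fx = cand, fc
            shared.update(x, fx)          # cooperation: push improvement

shared = Incumbent(np.zeros(4))
threads = [threading.Thread(target=worker, args=(shared, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"best objective found: {shared.f:.4f}")
```

The essential design point survives the simplification: threads search independently, and the shared incumbent lets good regions found by one thread redirect the others.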

  16. A cooperative strategy for parameter estimation in large scale systems biology models

    Directory of Open Access Journals (Sweden)

    Villaverde Alejandro F


    Full Text Available Abstract Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here

  17. A Feasibility Study on Operating Large Scale Compressed Air Energy Storage in Porous Formations (United States)

    Wang, B.; Pfeiffer, W. T.; Li, D.; Bauer, S.


    Compressed air energy storage (CAES) in porous formations has been considered a promising option for large-scale energy storage for decades. This study aims at analyzing the feasibility of operating large-scale CAES in porous formations and evaluating the performance of underground porous gas reservoirs. To address these issues quantitatively, a hypothetical CAES scenario with a typical anticline structure in northern Germany was numerically simulated. Because of the rapid growth in photovoltaics, the extraction periods in a daily cycle were set to the early morning and the late afternoon in order to bypass the massive solar energy production around noon. The gas turbine scenario was defined with reference to the specifications of the Huntorf CAES power plant. The numerical simulations involved two stages, i.e., initial fill and cyclic operation, and both were carried out using the Eclipse E300 simulator (Schlumberger). Pressure loss in the gas wells was post-analyzed using an analytical solution. The exergy concept was applied to evaluate the amount of energy stored in the specific porous formation. The simulation results show that porous formations are a feasible option for large-scale CAES. The initial fill with shut-in periods determines the spatial distribution of the gas phase and helps to achieve higher gas saturation around the wells, and thus higher deliverability. The performance evaluation shows that the overall exergy flow of the stored compressed air is also determined by the permeability, which directly affects the deliverability of the gas reservoir and thus the number of wells required.
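The exergy of air stored isothermally at ambient temperature, the metric used above to rate the reservoir, reduces for an ideal gas to R·T0·ln(p/p0) per unit mass; the reservoir figures below are placeholders, not values from the simulated anticline:

```python
import numpy as np

R_AIR = 287.06               # specific gas constant of dry air, J/(kg*K)
T0, P0 = 288.15, 1.013e5     # ambient temperature (K) and pressure (Pa)

def specific_exergy(p):
    """Flow exergy of ideal-gas air stored at ambient T0 and pressure p (J/kg)."""
    return R_AIR * T0 * np.log(p / P0)

# Placeholder reservoir: 5e7 kg of working gas at a mean pressure of 65 bar.
m_cycled_kg = 5.0e7
e_per_kg = specific_exergy(65e5)
print(f"specific exergy: {e_per_kg / 1e3:.0f} kJ/kg")
print(f"stored exergy:   {m_cycled_kg * e_per_kg / 3.6e12:.1f} GWh")
```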

  18. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.


    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought


    Energy Technology Data Exchange (ETDEWEB)

    Dr. Peter Brewer; Dr. James Barry


    We have continued to carry out creative small-scale experiments in the deep ocean to investigate the science underlying questions of possible future large-scale deep-ocean CO2 sequestration as a means of ameliorating greenhouse gas growth rates in the atmosphere. This project is closely linked to additional research funded by the DoE Office of Science, and to support from the Monterey Bay Aquarium Research Institute. The listing of project achievements here over the past year reflects these combined resources. Within the last project year we have: (1) Published a significant workshop report (58 pages) entitled "Direct Ocean Sequestration Expert's Workshop", based upon a meeting held at MBARI in 2001. The report is available both in hard copy and on the NETL web site. (2) Carried out three major deep-ocean (3600 m) cruises to examine the physical chemistry and biological consequences of several-liter quantities of liquid CO2 released on the ocean floor. (3) Carried out two successful short cruises in collaboration with Dr. Izuo Aya and colleagues (NMRI, Osaka, Japan) to examine the fate of cold (-55 °C) CO2 released at relatively shallow ocean depth. (4) Carried out two short cruises in collaboration with Dr. Costas Tsouris, ORNL, to field test an injection nozzle designed to transform liquid CO2 into a hydrate slurry at ~1000 m depth. (5) In collaboration with Prof. Jill Pasteris (Washington University) we have successfully accomplished the first field test of a deep-ocean laser Raman spectrometer for probing in situ the physical chemistry of the CO2 system. (6) Submitted the first major paper on biological impacts as determined from our field studies. (7) Submitted a paper on our measurements of the fate of a rising stream of liquid CO2 droplets to Environmental Science & Technology. (8) Have had accepted for publication in Eos the first brief account of the laser Raman spectrometer success. (9) Have had two

  20. Effects of mud supply on large-scale estuarine morphology (United States)

    Braat, Lisanne; Kleinhans, Maarten; van Kessel, Thijs; Wongsoredjo, Samor; Bergsma, Laura


    Sandy river estuaries have great economic and ecological value, but a better understanding of the effect of mud on large-scale morphodynamics is required to optimise maintenance strategies. Very few studies actually include sand-mud interaction effects on morphodynamics on decadal and centennial timescales, due to model limitations and a lack of spatially and temporally dense data on mud in the bed. Here we study the effects of cohesive sediment supply on equilibrium estuary shape, bar-channel patterns and dynamics, during formation from idealised initial conditions over a time scale of centuries and millennia. On the basis of related modelling and experimentation on river and delta patterns, we hypothesise that mud will settle into mud flats flanking the estuary that resist erosion and thus self-confine and narrow the estuary and reduce braiding index and channel-bar mobility. We applied the process-based numerical model Delft3D in depth-averaged mode starting from idealised convergent estuaries. Mixed sediment was modelled with an active layer and storage module, with fluxes predicted by the Partheniades-Krone relations for the cohesive regime and Engelund-Hansen for the non-cohesive regime, depending on the fraction of mud. This was subjected to a range of different mud inputs from the river or from the sea and a range of river discharges and tidal amplitudes. Our modelling results show that mud is predominantly stored in mudflats on the sides of the estuary. Higher mud concentration at the river inflow leads to narrower and shorter estuaries. Channels within the estuary also become narrower due to increased cohesion in the channel banks. This trend is confirmed in preliminary experiments. However, channels do not increase in depth; this is in contrast with what is observed in rivers, and we do not yet fully understand this. Migration rates of channels and bars and bar splitting and merging also reduce with increasing mud concentration. For higher discharge channel

  1. Large-scale seismic waveform quality metric calculation using Hadoop (United States)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.


    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely
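The performance-model idea described above — fit runtime as a function of dataset size, then extrapolate the speedup over the reference system — can be sketched as follows. The (size, runtime) points are hypothetical stand-ins, not the paper's measurements; only the 0.56 TB/h reference throughput comes from the text.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form, stdlib only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical (dataset TB, runtime h) measurements for a Spark job
spark_runs = [(5.1, 0.61), (10.0, 1.15), (20.0, 2.2), (43.0, 4.6)]
a, b = fit_linear([s for s, _ in spark_runs], [t for _, t in spark_runs])

def predicted_runtime_h(tb):
    # Runtime model for the cluster implementation
    return a + b * tb

REFERENCE_TBPH = 0.56  # TB/h measured on the 5-node reference system

def speedup_over_reference(tb):
    # The reference throughput is assumed flat, since its I/O does not scale
    return (tb / REFERENCE_TBPH) / predicted_runtime_h(tb)
```

The paper's 265x figure additionally assumes a 100-node cluster, which this toy model does not capture; the sketch only shows the extrapolation mechanics.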

  2. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey


    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  3. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.


    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  4. Developmental changes in large-scale network connectivity in autism. (United States)

    Nomi, Jason S; Uddin, Lucina Q


    Disrupted cortical connectivity is thought to underlie the complex cognitive and behavioral profile observed in individuals with autism spectrum disorder (ASD). Previous neuroimaging research has identified patterns of both functional hypo- and hyper-connectivity in individuals with ASD. A recent theory attempting to reconcile conflicting results in the literature proposes that hyper-connectivity of brain networks may be more characteristic of young children with ASD, while hypo-connectivity may be more prevalent in adolescents and adults with the disorder when compared to typical development (TD) (Uddin et al., 2013). Previous work has examined only young children, mixed groups of children and adolescents, or adult cohorts in separate studies, leaving open the question of developmental influences on functional brain connectivity in ASD. The current study tests this developmental hypothesis by examining within- and between-network resting-state functional connectivity in a large sample of 26 children, 28 adolescents, and 18 adults with ASD and age- and IQ-matched TD individuals for the first time using an entirely data-driven approach. Independent component analysis (ICA) and dual regression were applied to data from three age cohorts to examine the effects of participant age on patterns of within-network whole-brain functional connectivity in individuals with ASD compared with TD individuals. Between-network connectivity differences were examined for each age cohort by comparing correlations between ICA components across groups. We find that in the youngest cohort (age 11 and under), children with ASD exhibit hyper-connectivity within large-scale brain networks as well as decreased between-network connectivity compared with age-matched TD children. In contrast, adolescents with ASD (age 11-18) do not differ from TD adolescents in within-network connectivity, yet show decreased between-network connectivity compared with TD adolescents.
Adults with ASD show no within- or
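Between-network connectivity in analyses like this is typically quantified as the correlation between two networks' component time courses. A toy sketch with entirely synthetic signals (all data and noise levels hypothetical):

```python
import random

def pearson(x, y):
    """Pearson correlation between two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

rng = random.Random(7)
shared = [rng.gauss(0, 1) for _ in range(500)]  # common driving signal

# "Hyper-connected" pair: both network time courses follow the shared signal
net_a = [s + rng.gauss(0, 0.5) for s in shared]
net_b = [s + rng.gauss(0, 0.5) for s in shared]

# "Decreased connectivity" pair: statistically independent time courses
net_c = [rng.gauss(0, 1) for _ in range(500)]
net_d = [rng.gauss(0, 1) for _ in range(500)]

r_high = pearson(net_a, net_b)  # strong between-network coupling
r_low = pearson(net_c, net_d)   # weak between-network coupling
```

Group comparisons in the study then test whether such correlations differ between ASD and TD cohorts at each age.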

  5. Cost-effective Sampling Design Applied to Large-scale Monitoring of Boreal Birds

    Directory of Open Access Journals (Sweden)

    Matthew Carlson


    Despite their important roles in biodiversity conservation, large-scale ecological monitoring programs are scarce, in large part due to the difficulty of achieving an effective design under fiscal constraints. Using long-term avian monitoring in the boreal forest of Alberta, Canada, as an example, we present a methodology that uses power analysis, statistical modeling, and partial derivatives to identify cost-effective sampling strategies for ecological monitoring programs. Empirical parameter estimates were used in simulations that estimated the power of sampling designs to detect trends in a variety of species' populations and community metrics. The ability to detect trends with increased sample effort depended on the monitoring target's variability and how effort was allocated to sampling parameters. Power estimates were used to develop nonlinear models of the relationship between sample effort and power. A cost model was also developed, and partial derivatives of the power and cost models were evaluated to identify two cost-effective avian sampling strategies. For decreasing sample error, sampling multiple plots at a site is preferable to multiple within-year visits to the site, and many sites should be sampled relatively infrequently rather than a few sites frequently, although the importance of frequent sampling increases for variable targets. We end by stressing the need for long-term, spatially extensive data for additional taxa, and by introducing optimal design as an alternative to power analysis for the evaluation of ecological monitoring program designs.
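The power-versus-cost trade-off described above can be sketched with toy models: a saturating power curve, a linear cost model, and finite-difference "partial derivatives" that compare the marginal power gained per dollar from adding sites versus adding within-site plots. All functional forms and coefficients below are hypothetical.

```python
from math import exp

def power(n_sites, plots_per_site, a=0.004, b=0.5):
    """Hypothetical saturating power curve; b < 1 encodes diminishing
    returns from extra plots at the same site."""
    effort = n_sites * plots_per_site ** b
    return 1.0 - exp(-a * effort)

def cost(n_sites, plots_per_site, site_cost=500.0, plot_cost=120.0):
    """Linear cost model: a fixed cost per site plus a cost per plot."""
    return n_sites * (site_cost + plot_cost * plots_per_site)

def marginal_power_per_dollar(n_sites, plots_per_site, d_sites=0, d_plots=0):
    # Finite-difference analogue of the paper's partial-derivative analysis
    dp = power(n_sites + d_sites, plots_per_site + d_plots) - \
         power(n_sites, plots_per_site)
    dc = cost(n_sites + d_sites, plots_per_site + d_plots) - \
         cost(n_sites, plots_per_site)
    return dp / dc

# Compare adding one site vs. one more plot per site at a 100-site design
gain_site = marginal_power_per_dollar(100, 2, d_sites=1)
gain_plot = marginal_power_per_dollar(100, 2, d_plots=1)
```

Whichever marginal gain is larger indicates where the next dollar of sampling budget should go; in the actual study this comparison was done analytically on fitted power and cost models.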

  6. Incorporating Direct Rapid Immunohistochemical Testing into Large-Scale Wildlife Rabies Surveillance

    Directory of Open Access Journals (Sweden)

    Kevin Middel


    Following an incursion of the mid-Atlantic raccoon variant of the rabies virus into southern Ontario, Canada, in late 2015, the direct rapid immunohistochemical test for rabies (dRIT) was employed on a large scale to establish the outbreak perimeter and to diagnose specific cases to inform rabies control management actions. In a 17-month period, 5800 wildlife carcasses were tested using the dRIT, of which 307 were identified as rabid. When compared with the gold-standard fluorescent antibody test (FAT), the dRIT was found to have a sensitivity of 100% and a specificity of 98.2%. Positive and negative test agreement was shown to be 98.3% and 99.1%, respectively, with an overall test agreement of 98.8%. The average cost to test a sample was $3.13 CAD for materials, and hands-on technical time to complete the test is estimated at 0.55 h. The dRIT procedure was found to be accurate, fast, inexpensive, easy to learn and perform, and an excellent tool for monitoring the progression of a wildlife rabies incursion.

  7. JSTOR: Large Scale Digitization of Journals in the United States

    Directory of Open Access Journals (Sweden)

    Kevin M. Guthrie


    The JSTOR database now includes well over 2 million pages from 61 important journals in 13 academic disciplines. Additional journal content is being digitized at a rate of more than 100,000 pages per month. More than 320 libraries in the United States and Canada have become participating institutions, providing support for the creation, maintenance and growth of this database. Outside of North America, we have established a mirror site in the United Kingdom. Through a novel collaborative relationship with the Joint Information Systems Committee, the JSTOR database is now being made available to over 20 higher education institutions in England, Scotland, Wales and Northern Ireland from a mirror site at the University of Manchester. In addition, plans are underway to establish a second overseas mirror site in Budapest, Hungary to serve institutions in Eastern Europe and Russia. As each day passes, new opportunities are presented to us to extend the reach of this enterprise. It is an exciting and challenging time.

  8. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    Energy Technology Data Exchange (ETDEWEB)

    Margaret Riley; Merry Buckley


    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers with insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing are within reach for even the smallest facilities, and sequencing the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes.
A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  9. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations. (United States)

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S


    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
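The CB-FS idea — cluster the conformations, then score each degree of freedom by how well it separates the clusters — can be illustrated with a toy trajectory. This pure-Python sketch uses a simplistic 2-means step and an F-ratio score in place of the paper's Markov state models and supervised learners; all data are synthetic.

```python
import random

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def two_means(points, iters=10):
    """Simplistic 2-means clustering, deterministically initialised
    from the first and last frame."""
    centers = [list(points[0]), list(points[-1])]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
                  for p in points]
        for c in (0, 1):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members)
                              for col in zip(*members)]
    return labels

def f_ratio(points, labels, j):
    """Between- vs within-cluster variance of feature j: a stand-in for
    the supervised feature-importance score used in CB-FS."""
    vals = [p[j] for p in points]
    mean = sum(vals) / len(vals)
    groups = {}
    for v, l in zip(vals, labels):
        groups.setdefault(l, []).append(v)
    between = sum(len(g) * (sum(g) / len(g) - mean) ** 2
                  for g in groups.values())
    within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
                 for g in groups.values())
    return between / (within + 1e-12)

# Synthetic "trajectory": feature 0 separates two conformational states,
# features 1 and 2 are pure noise
rng = random.Random(42)
frames = [[(0.0 if i < 100 else 10.0) + rng.gauss(0, 0.5),
           rng.gauss(0, 1.0), rng.gauss(0, 1.0)] for i in range(200)]
labels = two_means(frames)
scores = [f_ratio(frames, labels, j) for j in range(3)]
best_feature = max(range(3), key=scores.__getitem__)
```

The highest-scoring feature is the order parameter that best separates the conformational states, which is the output CB-FS automates at scale.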

  10. Implementation of highly parallel and large scale GW calculations within the OpenAtom software (United States)

    Ismail-Beigi, Sohrab

    The need to describe electronic excitations with better accuracy than provided by band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana-Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.

  11. Test Problems for Large-Scale Multiobjective and Many-Objective Optimization. (United States)

    Cheng, Ran; Jin, Yaochu; Olhofer, Markus; Sendhoff, Bernhard


    The interests in multiobjective and many-objective optimization have been rapidly increasing in the evolutionary computation community. However, most studies on multiobjective and many-objective optimization are limited to small-scale problems, despite the fact that many real-world multiobjective and many-objective optimization problems may involve a large number of decision variables. As has been evident in the history of evolutionary optimization, the development of evolutionary algorithms (EAs) for solving a particular type of optimization problems has undergone a co-evolution with the development of test problems. To promote the research on large-scale multiobjective and many-objective optimization, we propose a set of generic test problems based on design principles widely used in the literature of multiobjective and many-objective optimization. In order for the test problems to be able to reflect challenges in real-world applications, we consider mixed separability between decision variables and nonuniform correlation between decision variables and objective functions. To assess the proposed test problems, six representative evolutionary multiobjective and many-objective EAs are tested on the proposed test problems. Our empirical results indicate that although the compared algorithms exhibit slightly different capabilities in dealing with the challenges in the test problems, none of them are able to efficiently solve these optimization problems, calling for the need for developing new EAs dedicated to large-scale multiobjective and many-objective optimization.
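For concreteness, a classic scalable biobjective test problem (ZDT1) instantiated with 1000 decision variables is sketched below. Note this is not the new test suite proposed in the paper, which further adds mixed separability and nonuniform correlation between decision variables and objectives; ZDT1 simply illustrates how such problems scale in the number of variables.

```python
from math import sqrt
import random

def zdt1(x):
    """Classic scalable biobjective test problem (ZDT1): only the first
    variable controls f1; the remaining variables form a distance-to-front
    term g, so the Pareto front lies at x[1:] == 0 (where g == 1)."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - sqrt(f1 / g))
    return f1, f2

n = 1000  # "large-scale": a thousand decision variables

# A point on the Pareto front, and a random (dominated) point
pareto_point = [0.3] + [0.0] * (n - 1)
rng = random.Random(0)
random_point = [rng.random() for _ in range(n)]
```

An evolutionary algorithm is evaluated by how closely and uniformly its final population approaches the front f2 = 1 - sqrt(f1); with n this large, most classic EAs stall far from it, which is the difficulty the proposed test suite is designed to probe.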

  12. Cooperation, collective action, and the archeology of large-scale societies. (United States)

    Carballo, David M; Feinman, Gary M


    Archeologists investigating the emergence of large-scale societies in the past have renewed interest in examining the dynamics of cooperation as a means of understanding societal change and organizational variability within human groups over time. Unlike earlier approaches to these issues, which used models designated voluntaristic or managerial, contemporary research articulates more explicitly with frameworks for cooperation and collective action used in other fields, thereby facilitating empirical testing through better definition of the costs, benefits, and social mechanisms associated with success or failure in coordinated group action. Current scholarship is nevertheless bifurcated along lines of epistemology and scale, which is understandable but problematic for forging a broader, more transdisciplinary field of cooperation studies. Here, we point to some areas of potential overlap by reviewing archeological research that places the dynamics of social cooperation and competition in the foreground of the emergence of large-scale societies, which we define as those having larger populations, greater concentrations of political power, and higher degrees of social inequality. We focus on key issues involving the communal-resource management of subsistence and other economic goods, as well as the revenue flows that undergird political institutions. Drawing on archeological cases from across the globe, with greater detail from our area of expertise in Mesoamerica, we offer suggestions for strengthening analytical methods and generating more transdisciplinary research programs that address human societies across scalar and temporal spectra. © 2016 Wiley Periodicals, Inc.

  13. The Effective Field Theory of Large Scale Structures at Two Loops

    CERN Document Server

    Carrasco, John Joseph M.; Green, Daniel; Senatore, Leonardo


    Large scale structure surveys promise to be the next leading probe of cosmological information. It is therefore crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbation theory for the weakly non-linear regime of dark matter, where correlation functions are computed in an expansion of the wavenumber k of a mode over the wavenumber associated with the non-linear scale k_nl. Since most of the information is contained at high wavenumbers, it is necessary to compute higher order corrections to correlation functions. After the one-loop correction to the matter power spectrum, we estimate that the next leading one is the two-loop contribution, which we compute here. At this order in k/k_nl, there is only one counterterm in the EFTofLSS that must be included, though this term contributes both at tree-level and in several one-loop diagrams. We also discuss correlation functions involving the velocity and momentum fields...
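Schematically, the expansion described above organizes the matter power spectrum as follows (a sketch only; coefficients, signs, and conventions for the counterterm vary across the literature):

```latex
P(k) \;=\; P_{11}(k) \;+\; P_{\text{1-loop}}(k) \;+\; P_{\text{2-loop}}(k)
\;-\; 2\, c_s^2\, \frac{k^2}{k_{\rm nl}^2}\, P_{11}(k) \;+\; \dots,
\qquad
P_{L\text{-loop}} \;\sim\; \left(\frac{k}{k_{\rm nl}}\right)^{2L} P_{11}
```

Each successive loop is suppressed by powers of k/k_nl, which is why the two-loop term is the next-to-leading correction in the weakly non-linear regime, and why a single counterterm (here denoted c_s^2) suffices at this order.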

  14. Systems metabolic engineering of microorganisms to achieve large-scale production of flavonoid scaffolds. (United States)

    Wu, Junjun; Du, Guocheng; Zhou, Jingwen; Chen, Jian


    Flavonoids possess pharmaceutical potential due to their health-promoting activities. The complex structures of these products make extraction from plants difficult, and chemical synthesis is limited because of the use of many toxic solvents. Microbial production offers an alternate way to produce these compounds on an industrial scale in a more economical and environment-friendly manner. However, at present microbial production has been achieved only on a laboratory scale and improvements and scale-up of these processes remain challenging. Naringenin and pinocembrin, which are flavonoid scaffolds and precursors for most of the flavonoids, are the model molecules that are key to solving the current issues restricting industrial production of these chemicals. The emergence of systems metabolic engineering, which combines systems biology with synthetic biology and evolutionary engineering at the systems level, offers new perspectives on strain and process optimization. In this review, current challenges in large-scale fermentation processes involving flavonoid scaffolds and the strategies and tools of systems metabolic engineering used to overcome these challenges are summarized. This will offer insights into overcoming the limitations and challenges of large-scale microbial production of these important pharmaceutical compounds. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. User-Developer Communication in Large-Scale IT Projects


    Abelein, Ulrike


    User participation and involvement in software development has been studied for a long time and is considered essential for a successful software system. The positive effects of involving users in software development include improving quality in light of information about precise requirements, avoiding unnecessarily expensive features through enhanced alignment between developers and users, creating a positive attitude toward the system among users, and enabling effective use of the system. H...

  16. Landsat 7 Reveals Large-scale Fractal Motion of Clouds (United States)


    get carried along within the vortices, but these are soon mixed into the surrounding clouds. Landsat is unique in its ability both to image the small-scale eddies that mix clear and cloudy air, down to its 30-meter pixel size, and to provide a wide enough field of view, 180 km, to reveal the connection of the turbulence to large-scale flows such as the subtropical oceanic gyres. Landsat 7, with its new onboard digital recorder, has extended this capability away from the few Landsat ground stations to remote areas such as Alejandro Island, and thus is gradually providing a global dynamic picture of evolving human-scale phenomena. (For more details on von Karman vortices, refer to Image and caption courtesy Bob Cahalan, NASA GSFC

  17. Large-Scale Impact Cratering and Early Earth Evolution (United States)

    Grieve, R. A. F.; Cintala, M. J.


    The surface of the Moon attests to the importance of large-scale impact in its early crustal evolution. Previous models of the effects of a massive bombardment on terrestrial crustal evolution have relied on analogies with the Moon, with allowances for the presence of water and a thinner lithosphere. It is now apparent that strict lunar-terrestrial analogies are incorrect because of the "differential scaling" of crater dimensions and melt volumes with event size and planetary gravity. Impact melt volumes and transient cavity dimensions for specific impacts were modeled according to previous procedures. In the terrestrial case, the melt volume (V(sub m)) exceeds that of the transient cavity (V(sub tc)) at diameters > or = 400 km. This condition is reached on the Moon only with transient cavity diameters > or = 3000 km, equivalent to whole-Moon melting. The melt volumes in these large impact events are minimum estimates, since, at these sizes, the higher temperature of the target rocks at depth will increase melt production. Using the modification-scaling relation of Croft, a transient cavity diameter of about 400 km in the terrestrial environment corresponds to an expected final impact "basin" diameter of about 900 km. Such a "basin" would be comparable in dimensions to the lunar basin Orientale. This 900-km "basin" on the early Earth, however, would not have had the appearance of Orientale. It would have been essentially a melt pool, and, morphologically, would have had more in common with the palimpsest structures on Callisto and Ganymede. With the terrestrial equivalents to the large multiring basins of the Moon being manifested as muted palimpsest-like structures filled with impact melt, it is unlikely they played a role in establishing the freeboard on the early Earth. The composition of the massive impact melt sheets (> 10^7 cu km) produced in "basin-forming" events on the early Earth would have most likely ranged from basaltic to more mafic for the

  18. Features of the method of large-scale paleolandscape reconstructions (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina


    The method of paleolandscape reconstruction was tested in a key area of the central Dubna basin, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene, and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of the work was as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, archival materials of geological and hydrological surveys and peat-deposit prospecting, archaeological evidence on ancient settlements, palynological and osteological analyses, and integrated landscape-archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of restored paleolakes were determined from the thickness and spatial extent of decay ooze (sapropel) deposits. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed; reconstructions of the original, indigenous flora relied on data from palynological studies conducted in the study area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development, and the formation of the first anthropogenically transformed landscape complexes. 
The reconstruction of the dynamics of the

  19. Developing A Large-Scale, Collaborative, Productive Geoscience Education Network (United States)

    Manduca, C. A.; Bralower, T. J.; Egger, A. E.; Fox, S.; Ledley, T. S.; Macdonald, H.; Mcconnell, D. A.; Mogk, D. W.; Tewksbury, B. J.


    Over the past 15 years, the geoscience education community has grown substantially and developed broad and deep capacity for collaboration and dissemination of ideas. While this community is best viewed as emergent from complex interactions among changing educational needs and opportunities, we highlight the role of several large projects in the development of a network within this community. In the 1990s, three NSF projects came together to build a robust web infrastructure to support the production and dissemination of on-line resources: On The Cutting Edge (OTCE), Earth Exploration Toolbook, and Starting Point: Teaching Introductory Geoscience. Along with the contemporaneous Digital Library for Earth System Education, these projects engaged geoscience educators nationwide in exploring professional development experiences that produced lasting on-line resources, collaborative authoring of resources, and models for web-based support for geoscience teaching. As a result, a culture developed in the 2000s in which geoscience educators anticipated that resources for geoscience teaching would be shared broadly and that collaborative authoring would be productive and engaging. By this time, a diverse set of examples demonstrated the power of the web infrastructure in supporting collaboration, dissemination and professional development. Building on this foundation, more recent work has expanded both the size of the network and the scope of its work. Many large research projects initiated collaborations to disseminate resources supporting educational use of their data. Research results from the rapidly expanding geoscience education research community were integrated into the Pedagogies in Action website and OTCE. Projects engaged faculty across the nation in large-scale data collection and educational research. The Climate Literacy and Energy Awareness Network and OTCE engaged community members in reviewing the expanding body of on-line resources. Building Strong


    Energy Technology Data Exchange (ETDEWEB)

    Koopman, D.; Martino, C.; Poirier, M.


    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  1. Detection of correlated galaxy ellipticities from CFHT data: first evidence for gravitational lensing by large-scale structures (United States)

    Van Waerbeke, L.; Mellier, Y.; Erben, T.; Cuillandre, J. C.; Bernardeau, F.; Maoli, R.; Bertin, E.; McCracken, H. J.; Le Fèvre, O.; Fort, B.; Dantel-Fort, M.; Jain, B.; Schneider, P.


    We report the detection of a significant (5.5 sigma) excess of correlations between galaxy ellipticities at scales ranging from 0.5 to 3.5 arc-minutes. This detection of a gravitational lensing signal by large-scale structure was made using a composite high-quality imaging survey of 6300 arcmin^2 obtained at the Canada-France-Hawaii Telescope (CFHT) with the UH8K and CFH12K panoramic CCD cameras. The amplitude of the excess correlation is 2.2 +/- 0.2% at 1 arcmin scale, in agreement with theoretical predictions of the lensing effect induced by large-scale structure. We provide a quantitative analysis of systematics which could contribute to the signal and show that the net effect is small and can be corrected for. In particular, we show that the spurious excess of correlations caused by the residual of the anisotropic Point Spread Function (PSF) correction is well below the measured signal. We show that the measured ellipticity correlations behave as expected for a gravitational shear signal. The relatively small size of our survey precludes tight constraints on cosmological models. However, the data favor cluster-normalized cosmological models and marginally reject Cold Dark Matter models with (Omega = 0.3, sigma_8 …). The results demonstrate the technical feasibility of using weak lensing surveys to measure dark matter clustering and the potential for cosmological parameter measurements, in particular with upcoming wide-field CCD cameras. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada (NRCC), the Institut des Sciences de l'Univers (INSU) of the Centre National de la Recherche Scientifique (CNRS) and the University of Hawaii (UH)


    Directory of Open Access Journals (Sweden)

    Yu. V. Zelenko


    Full Text Available The principles of and requirements for remediation measures after emergency spills of oil products are presented. A device for local cleaning of soil contaminated with light oil products, with their subsequent utilization, has been developed.

  3. Parallelization strategy for large-scale vibronic coupling calculations. (United States)

    Rabidoux, Scott M; Eijkhout, Victor; Stanton, John F


    The vibronic coupling model of Köppel, Domcke, and Cederbaum is a powerful means to understand, predict, and analyze electronic spectra of molecules, especially those that exhibit phenomena that involve breakdown of the Born-Oppenheimer approximation. In this work, we describe a new parallel algorithm for carrying out such calculations. The algorithm is conceptually founded upon a "stencil" representation of the required computational steps, which motivates an efficient strategy for coarse-grained parallelization. The equations involved in the direct-CI type diagonalization of the model Hamiltonian are presented, the parallelization strategy is discussed in detail, and the method is illustrated by calculations involving direct-product basis sets with as many as 17 vibrational modes and 130 billion basis functions.
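    The central operation in a direct-CI type diagonalization is a Hamiltonian-vector product applied inside an iterative eigensolver, never forming the full matrix. A minimal sketch of that idea, using a plain Lanczos iteration on a small random symmetric matrix standing in for the vibronic-coupling Hamiltonian (the stencil decomposition and parallelization of the paper are not reproduced here):

```python
import numpy as np

# Hedged sketch: iterative ("direct"-style) diagonalization that touches the
# Hamiltonian only through matrix-vector products, as in direct-CI methods.
# The toy H below is a stand-in; real vibronic Hamiltonians are huge and sparse.

def lanczos_lowest(matvec, n, k=60, seed=0):
    """Approximate the lowest eigenvalue with k Lanczos steps of matvec."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    V, alphas, betas = [v], [], []
    beta, v_prev = 0.0, np.zeros(n)
    for _ in range(k):
        w = matvec(v) - beta * v_prev
        alpha = v @ w
        alphas.append(alpha)
        w -= alpha * v
        for u in V:                 # full reorthogonalization keeps the
            w -= (u @ w) * u        # small example numerically clean
        beta = np.linalg.norm(w)
        if beta < 1e-12:
            break
        betas.append(beta)
        v_prev, v = v, w / beta
        V.append(v)
    off = betas[:len(alphas) - 1]
    T = np.diag(alphas) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(T)[0]

rng = np.random.default_rng(1)
H = rng.standard_normal((200, 200))
H = 0.5 * (H + H.T)                 # symmetric toy "Hamiltonian"
approx = lanczos_lowest(lambda x: H @ x, 200)
exact = np.linalg.eigvalsh(H)[0]
print(abs(approx - exact))          # extreme eigenvalues converge quickly
```

    The point of the sketch is the interface: the solver only ever calls `matvec`, which is exactly the operation the paper's stencil representation parallelizes.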

  4. Large Scale Crop Mapping in Ukraine Using Google Earth Engine (United States)

    Shelestov, A.; Lavreniuk, M. S.; Kussul, N.


    F.J. Gallego, N. Kussul, S. Skakun, O. Kravchenko, A. Shelestov, O. Kussul, "Efficiency assessment of using satellite data for crop area estimation in Ukraine," International Journal of Applied Earth Observation and Geoinformation, vol. 29, pp. 22-30, 2014.

  5. Technologies and challenges in large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Larsen, Martin Røssel


    , and apoptosis, rely on phosphorylation. This PTM is thus involved in many diseases, rendering localization and assessment of extent of phosphorylation of major scientific interest. MS-based phosphoproteomics, which aims at describing all phosphorylation sites in a specific type of cell, tissue, or organism, has...

  6. Software Tools For Large Scale Interactive Hydrodynamic Modeling

    NARCIS (Netherlands)

    Donchyts, G.; Baart, F.; van Dam, A; Jagers, B; van der Pijl, S.; Piasecki, M.


    Developing easy-to-use software that combines components for simultaneous visualization, simulation and interaction is a great challenge, mainly because it involves a number of disciplines, such as computational fluid dynamics, computer graphics, and high-performance computing. One of the main

  7. Challenges of Modeling Flood Risk at Large Scales (United States)

    Guin, J.; Simic, M.; Rowe, J.


    algorithm propagates the flows for each simulated event. The model incorporates a digital terrain model (DTM) at 10 m horizontal resolution, which is used to extract flood-plain cross-sections so that a one-dimensional hydraulic model can be used to estimate the extent and elevation of flooding. In doing so, the effect of flood defenses in mitigating floods is accounted for. Finally, a suite of vulnerability relationships has been developed to estimate flood losses for a portfolio of properties that are exposed to flood hazard. Historical experience indicates that for recent floods in Great Britain more than 50% of insurance claims occur outside the flood plain, primarily as a result of excess surface flow, hillside flooding, and flooding due to inadequate drainage. A sub-component of the model addresses this issue by considering several parameters that best explain the variability of claims off the flood plain. The challenges of modeling such a complex phenomenon at a large scale largely dictate the choice of modeling approaches that need to be adopted for each of these model components. While detailed numerically-based physical models exist and have been used for conducting flood hazard studies, they are generally restricted to small geographic regions. In a probabilistic risk estimation framework like our current model, a blend of deterministic and statistical techniques has to be employed so that each model component is independent, physically sound, and able to maintain the statistical properties of observed historical data. This is particularly important because of the highly non-linear behavior of the flooding process. With respect to vulnerability modeling, both on and off the flood plain, the challenges include the appropriate scaling of a damage relationship when applied to a portfolio of properties. 
This arises from the fact that the estimated hazard parameter used for damage assessment, namely maximum flood depth has considerable uncertainty. The

  8. Automatic Three-Dimensional Measurement of Large-Scale Structure Based on Vision Metrology

    Directory of Open Access Journals (Sweden)

    Zhaokun Zhu


    Full Text Available All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.

  9. Automatic Three-Dimensional Measurement of Large-Scale Structure Based on Vision Metrology (United States)

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng


    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods. PMID:24701143
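    The blob-detection step that locates the circular retroreflective discs can be illustrated with a minimal connected-component pass over a binarized image. This is a generic stand-in, not the paper's algorithm: the threshold, 4-connectivity, and centroid rule are illustrative choices.

```python
import numpy as np
from collections import deque

# Hedged sketch of blob detection: label 4-connected bright regions in a
# boolean image and return their centroids, as a stand-in for locating the
# circular retroreflective discs of the coded targets.

def blob_centroids(binary):
    """4-connected components of a boolean image -> list of (row, col) centroids."""
    visited = np.zeros_like(binary, dtype=bool)
    centroids = []
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and not visited[r, c]:
                queue, pixels = deque([(r, c)]), []
                visited[r, c] = True
                while queue:                    # breadth-first flood fill
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

img = np.zeros((20, 20), dtype=bool)
img[2:5, 2:5] = True       # a 3x3 "disc" with centroid (3.0, 3.0)
img[10:14, 12:16] = True   # a 4x4 "disc" with centroid (11.5, 13.5)
print(blob_centroids(img))  # → [(3.0, 3.0), (11.5, 13.5)]
```

    In a real pipeline the centroids would then feed the coded-target recognition and clustering stages described in the abstract.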

  10. The Laser Button: A Novel Approach To The Large Scale Replication Of Holograms (United States)

    Cowan, James J.


    The replication of holograms on a large scale requires the successful completion of many intricate steps from the initial artistic concept to the finished product. Complicated manufacturing processes must be done efficiently and economically to produce replicas displayed in an esthetically pleasing manner and with the maximum possible optical effect. A description is made of the "high tech laser button" that was designed as a souvenir for the recent World's Fair and the stages involved in its replication. In the basic design, an intricate overlay of multiple holographic gratings onto a recording medium was required. From this a durable metal master was made that was subsequently used to emboss the pattern into a thin plastic sheet. The resulting metallized plastic was die cut and mounted onto a button. The practical and artistic reasons for mounting the holograms in this fashion are discussed. Also considered are subsequent, more advanced designs and the unique optical effects that result from them.

  11. Implementation of an effective hybrid GA for large-scale traveling salesman problems. (United States)

    Nguyen, Hung Dinh; Yoshihara, Ikuo; Yamamori, Kunihito; Yasunaga, Moritoshi


    This correspondence describes a hybrid genetic algorithm (GA) to find high-quality solutions for the traveling salesman problem (TSP). The proposed method is based on a parallel implementation of a multipopulation steady-state GA involving local search heuristics. It uses a variant of the maximal preservative crossover and the double-bridge move mutation. An effective implementation of the Lin-Kernighan heuristic (LK) is incorporated into the method to compensate for the GA's lack of local search ability. The method is validated by comparing it with the LK-Helsgaun method (LKH), which is one of the most effective methods for the TSP. Experimental results with benchmarks having up to 316228 cities show that the proposed method works more effectively and efficiently than LKH when solving large-scale problems. Finally, the method is used together with the implementation of the iterated LK to find a new best tour (as of June 2, 2003) for a 1904711-city TSP challenge.
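    The double-bridge move used as the mutation operator has a compact standard form: cut the tour at three random points and reconnect the four segments in a fixed reordering. A minimal sketch of that standard 4-opt move (the paper's exact implementation details are not reproduced):

```python
import random

# Hedged sketch of the standard double-bridge (4-opt) move: split the tour
# into segments A|B|C|D at three random cut points and return A+C+B+D.

def double_bridge(tour, rng=random):
    """Return a new tour A+C+B+D from segments A|B|C|D of the input tour."""
    n = len(tour)
    i, j, k = sorted(rng.sample(range(1, n), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

rng = random.Random(42)
tour = list(range(10))
new_tour = double_bridge(tour, rng)
print(sorted(new_tour) == tour)  # still a permutation of the same cities
```

    The move is popular as a mutation/kick because it cannot be undone by the 2-opt and 3-opt exchanges that local search heuristics like LK perform, so it reliably ejects the search from local optima.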

  12. Scaled first-order methods for a class of large-scale constrained least square problems (United States)

    Coli, Vanna Lisa; Ruggiero, Valeria; Zanni, Luca


    Typical applications in signal and image processing often require the numerical solution of large-scale linear least squares problems with simple constraints, related to an m × n nonnegative matrix A, m ≪ n. When the size of A is such that the matrix is not available in memory and only the operators of the matrix-vector products involving A and A^T can be computed, forward-backward methods combined with suitable accelerating techniques are very effective; in particular, gradient projection methods can be improved by suitable step-length rules or by an extrapolation/inertial step. In this work, we propose a further acceleration technique for both schemes, based on the use of variable metrics tailored to the considered problems. The numerical effectiveness of the proposed approach is evaluated on randomly generated test problems and real data arising from a problem of fibre orientation estimation in diffusion MRI.
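    The baseline these accelerations build on is the plain gradient projection (forward-backward) iteration, which touches A only through matrix-vector products. A minimal sketch for the nonnegativity-constrained case, with the simplest fixed step length 1/L (the paper's variable metrics and extrapolation steps are not reproduced):

```python
import numpy as np

# Hedged sketch: gradient projection (forward-backward) iteration for
#   min ||Ax - b||^2  subject to  x >= 0,
# using only products with A and A^T. The fixed step 1/L is the simplest
# choice; step-length rules / inertia / variable metrics would accelerate it.

def projected_gradient_nnls(A, b, iters=2000):
    """Solve min ||Ax-b||^2, x >= 0 by projected gradient descent."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # forward (gradient) step ...
        x = np.maximum(x - grad / L, 0.0)  # ... backward (projection) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.maximum(rng.standard_normal(10), 0.0)  # feasible ground truth
b = A @ x_true                                     # consistent data
x_hat = projected_gradient_nnls(A, b)
print(np.abs(x_hat - x_true).max())
```

    Because the projection onto the nonnegative orthant is a cheap componentwise `max`, each iteration costs essentially two matrix-vector products, which is what makes the scheme viable when A itself cannot be stored.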

  13. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko [SISSA, via Bonomea 265, Trieste, 34136 (Italy)


    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation by looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10^(-3)-10^(-4) on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System, and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.


    Directory of Open Access Journals (Sweden)

    Sunduck D. SUH, Ph.D., P.E.


    Full Text Available Risk management experiences of the Korean Seoul-Pusan high-speed railway (KTX) project since the planning stage are evaluated. One can clearly see the interplay of engineering and construction risks, financial risks and political risks in the development of the KTX project, a peculiarity of large-scale new railway system projects. A brief description of the evaluation methodology and an overview of the project are followed by detailed evaluations of key differences in risk between conventional and high-speed railway systems, social and political risks, engineering and construction risks, and financial risks. Risks involved in the system procurement process, such as proposal solicitation, evaluation, selection, and scope of solicitation, are separated out and evaluated in depth. Detailed events resulting from these issues are discussed along with their possible impact on system risk. Lessons learned and possible further refinements are also discussed.

  15. Growth and luminescence characterization of large-scale zinc oxide nanowires

    CERN Document Server

    Dai, L; Wang, W J; Zhou, T; Hu, B Q


    Large-scale zinc oxide (ZnO) nanowires were grown via a simple chemical reaction involving water vapour. Electron microscopy observations reveal that the ZnO nanowires are single crystalline and grow along the c-axis ([001]) direction. Room temperature photoluminescence measurements show a striking blue emission at 466 nm along with two other emissions in the ultraviolet and yellow regions. Annealing treatment of the as-grown ZnO nanowires results in an apparent reduction of the intensity of the blue emission, which indicates that the blue emission might be originating from the oxygen or zinc defects generated in the process of growth of the ZnO nanowires.

  16. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation. (United States)

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho


    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting.
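    The MLEM update itself is a short multiplicative fixed-point iteration. A minimal sketch in plain NumPy on a toy system matrix (the distributed Spark/GraphX formulation is not reproduced; P, its size, and the noise-free data are illustrative assumptions):

```python
import numpy as np

# Hedged sketch of the MLEM update for emission tomography, y ≈ P x:
#   x_{k+1} = x_k / (P^T 1) * P^T ( y / (P x_k) )
# All quantities are nonnegative; the iteration is multiplicative, so a
# positive start stays positive.

def mlem(P, y, iters=5000):
    """Maximum-likelihood expectation maximization for y ≈ P x."""
    x = np.ones(P.shape[1])
    sens = P.sum(axis=0)              # sensitivity image, P^T 1
    for _ in range(iters):
        proj = P @ x                  # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x = x / sens * (P.T @ ratio)  # backproject the measurement ratio
    return x

rng = np.random.default_rng(0)
P = rng.uniform(0.1, 1.0, size=(40, 8))   # toy system (projection) matrix
x_true = rng.uniform(0.5, 2.0, size=8)
y = P @ x_true                            # noise-free measurements
x_hat = mlem(P, y)
print(np.abs(x_hat - x_true).max())
```

    The forward projection `P @ x` and backprojection `P.T @ ratio` are the two bulk operations; they are what a Spark/GraphX port distributes as sparse matrix-vector products over the cluster.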


    Directory of Open Access Journals (Sweden)

    Mehmet Tolga Taner


    Full Text Available This paper aims to investigate the Critical Success Factors (CSFs) for the successful introduction of Six Sigma in large-scale Turkish construction companies. A survey-based approach is used in order to identify and understand current quality practices; CSFs and impeding factors are identified and analysed. Involvement and commitment of top management, linking quality initiatives to customers, and linking quality initiatives to suppliers are found to be the most important CSFs for the construction companies. Leadership and commitment of top management, cross-functional teamwork, and commitment of middle managers are found to be the most important CSFs for the successful introduction of Six Sigma, whereas lack of knowledge of how to initiate the system, and complacency, are found to hinder its implementation. High costs and high amounts of waste are found to lower the performance of Turkish construction companies.

  18. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Jianda Han


    Full Text Available One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method.
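    The enhancements described above sit on top of the basic ICP loop: pair each point with its nearest neighbour, solve for the best rigid transform, apply it, and repeat. A minimal sketch of that plain point-to-point loop with brute-force neighbour search and an SVD-based transform (the paper's octree search, early-warning mechanism, and heuristic escape are not reproduced; the test scene is synthetic):

```python
import numpy as np

# Hedged sketch of plain point-to-point ICP. Nearest neighbours are found by
# brute force; the rigid transform is the classical SVD (Kabsch) solution.

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d.argmin(axis=1)]        # nearest-neighbour pairing
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t                    # apply the incremental update
    return cur

rng = np.random.default_rng(0)
dst = rng.uniform(-1, 1, size=(50, 3))
theta = 0.1                                    # small known rotation + shift
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T + np.array([0.05, -0.02, 0.03])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())
```

    For a small, noise-free perturbation like this the loop typically recovers the alignment essentially exactly; the local-minimum problems the paper targets appear with larger initial misalignments, partial overlap, and noise.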

  19. Are large-scale flow experiments informing the science and management of freshwater ecosystems? (United States)

    Olden, Julian D.; Konrad, Christopher P.; Melis, Theodore S.; Kennard, Mark J.; Freeman, Mary C.; Mims, Meryl C.; Bray, Erin N.; Gido, Keith B.; Hemphill, Nina P.; Lytle, David A.; McMullen, Laura E.; Pyron, Mark; Robinson, Christopher T.; Schmidt, John C.; Williams, John G.


    Greater scientific knowledge, changing societal values, and legislative mandates have emphasized the importance of implementing large-scale flow experiments (FEs) downstream of dams. We provide the first global assessment of FEs to evaluate their success in advancing science and informing management decisions. Systematic review of 113 FEs across 20 countries revealed that clear articulation of experimental objectives, while not universally practiced, was crucial for achieving management outcomes and changing dam-operating policies. Furthermore, changes to dam operations were three times less likely when FEs were conducted primarily for scientific purposes. Despite the recognized importance of riverine flow regimes, four-fifths of FEs involved only discrete flow events. Over three-quarters of FEs documented both abiotic and biotic outcomes, but only one-third examined multiple taxonomic responses, thus limiting how FE results can inform holistic dam management. Future FEs will present new opportunities to advance scientifically credible water policies.

  20. Influence of oxygen in architecting large scale nonpolar GaN nanowires

    CERN Document Server

    Patsha, Avinash; Pandian, Ramanathaswamy; Dhara, S


    Manipulation of the surface architecture of semiconducting nanowires, with control over surface polarity, is one of the important objectives for nanowire-based electronic and optoelectronic devices heading toward commercialization. We report the growth of exceptionally high structural and optical quality nonpolar GaN nanowires with controlled and uniform surface morphology and size distribution, for large-scale production. The role of O contamination (~1-10^5 ppm) in the surface architecture of these nanowires is investigated, along with the possible mechanism involved. Nonpolar GaN nanowires grown in the O-rich condition show inhomogeneous surface morphologies and sizes (50-150 nm), while nanowires grown in the O-reduced condition have precise sizes of 40(5) nm and uniform surface morphology. Relative O contents are estimated using electron energy loss spectroscopy. Size-selective growth of uniform nanowires is also demonstrated, in the O-reduced condition, using different catalyst sizes. Photoluminescen...