WorldWideScience

Sample records for multielement large-scale analysis

  1. A study of possibility of application of instrumental NNA for multielement large-scale analysis of materials important for ecology and geology; Rozpoznanie mozliwosci zastosowania instrumentalnej wersji NNA do wielopierwiastkowej analizy duzych serii materialow waznych z punktu widzenia ekologii i geologii

    Energy Technology Data Exchange (ETDEWEB)

    Wasek, M.; Szopa, Z.; Dybczynski, R.

    1997-12-31

A general scheme of INAA used in the Laboratory of Radiometric Methods of the INCT, as well as the present status of equipment and software serving for processing and interpretation of gamma-ray spectra, has been presented. Sequential stages of multielement analysis were described, especially those where potential systematic and random errors that might affect the reliability of the analysis could be expected. Conclusions estimating the possibilities of INAA analyses carried out under present conditions have been drawn, and future needs concerning both equipment and software serving for large-scale multielement routine analyses have been pointed out. (author). 13 refs, 5 figs, 3 tabs.

  2. Thoughts on multielement analysis

    International Nuclear Information System (INIS)

    Kaiser, H.

    1976-01-01

The author discusses, in an informal fashion, some of the important aspects of multielement analysis that are frequently overlooked in the present-day trend of trying to measure everything (elements, compounds) in everything (environmental samples). While many points are touched upon, with the aim of providing 'fuel' for the subsequent General Discussion, two themes are illustrated in some depth: do our backgrounds spoil our results, and do our experiments require the impossible? A basis for planning experimental strategy is outlined. (author)

  3. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...
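
To make the proportion analysis concrete, here is a minimal single-machine sketch that gathers width/height ratios from a folder of images and summarizes them; the directory name, the JPEG-only glob, and the use of Pillow are illustrative assumptions, not details from the study.

```python
# Sketch: collect width/height ratios for a folder of images and
# summarize them, as in the proportion analysis described above.
# The folder path "paintings/" is a hypothetical example.
from pathlib import Path
from PIL import Image
import statistics

def proportion_stats(image_dir):
    ratios = []
    for path in Path(image_dir).glob("*.jpg"):
        with Image.open(path) as im:
            w, h = im.size
            ratios.append(w / h)
    return {
        "n": len(ratios),
        "median": statistics.median(ratios),
        # fraction of near-square items (ratio within 1% of 1.0)
        "share_square": sum(abs(r - 1.0) < 0.01 for r in ratios) / len(ratios),
    }

print(proportion_stats("paintings/"))
```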

  4. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  5. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to analysis of 3D atmospheric climatic data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
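
The Spark pipeline itself is not reproduced here, but the core computation the abstract describes, EOFs as the leading right singular vectors of a centered, latitude-weighted data matrix, can be sketched on a single machine with NumPy; the array shapes and random stand-in data are assumptions for illustration.

```python
# Single-machine sketch of the EOF computation: EOFs are the right
# singular vectors of the centered, latitude-weighted data matrix
# (time x space). The paper distributes this same linear algebra in Spark.
import numpy as np

rng = np.random.default_rng(0)
ntime, nlat, nlon = 120, 45, 90
data = rng.standard_normal((ntime, nlat, nlon))          # stand-in for temperature
lats = np.linspace(-88, 88, nlat)
weights = np.sqrt(np.cos(np.deg2rad(lats)))              # latitude weighting

X = (data * weights[None, :, None]).reshape(ntime, -1)   # time x space matrix
X -= X.mean(axis=0)                                      # remove time mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eofs = Vt[:100].reshape(-1, nlat, nlon)                  # first 100 EOFs
explained = s**2 / np.sum(s**2)                          # variance fractions
print(eofs.shape, explained[:5])
```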

  6. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

...survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi–site data by using within–year recaptures to provide a covariate of between–year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re–introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and “trap–happiness” effects. As the data are based on resightings, such trap–happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate...

  7. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

Full Text Available The paper investigates top-flange fatigue damage in a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC.Marc/Mentat and analyzes its fatigue strain; implements load simulation of the flange fatigue working condition with the Bladed software; acquires the flange fatigue load spectrum with the rain-flow counting method; and finally realizes the fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The analysis result provides new thinking for flange fatigue analysis of large-scale wind turbine generators and possesses practical engineering value.
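
As a rough illustration of the final step described above, the following sketch applies Palmgren-Miner linear damage summation to a rain-flow load spectrum; the S-N curve constants and cycle counts are hypothetical, not values from the paper.

```python
# Illustrative Palmgren-Miner damage summation on a rain-flow load
# spectrum. The S-N constants C and m and the spectrum entries are
# hypothetical placeholders, not the paper's data.
def miner_damage(spectrum, C=1e12, m=3.0):
    """spectrum: list of (stress_range_MPa, n_cycles) from rain-flow counting.
    S-N curve: N_allowed = C * S**(-m). Damage D = sum(n_i / N_i);
    D >= 1 indicates predicted fatigue failure."""
    return sum(n / (C * s**(-m)) for s, n in spectrum)

spectrum = [(80.0, 2.0e5), (120.0, 5.0e4), (200.0, 1.0e3)]
print(f"cumulative damage D = {miner_damage(spectrum):.3f}")
```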

  8. Large-Scale Analysis of Network Bistability for Human Cancers

    Science.gov (United States)

    Shiraishi, Tetsuya; Matsuyama, Shinako; Kitano, Hiroaki

    2010-01-01

    Protein–protein interaction and gene regulatory networks are likely to be locked in a state corresponding to a disease by the behavior of one or more bistable circuits exhibiting switch-like behavior. Sets of genes could be over-expressed or repressed when anomalies due to disease appear, and the circuits responsible for this over- or under-expression might persist for as long as the disease state continues. This paper shows how a large-scale analysis of network bistability for various human cancers can identify genes that can potentially serve as drug targets or diagnosis biomarkers. PMID:20628618

  9. Combined process automation for large-scale EEG analysis.

    Science.gov (United States)

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
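
A hedged sketch of two of the automated steps, (2) band-frequency waveform generation and (3) spike detection, is given below using SciPy; the sampling rate, band edges, and threshold rule are illustrative assumptions rather than the authors' exact parameters.

```python
# Sketch of two automated EEG-processing steps: band-pass filtering to a
# user-defined frequency band, then simple amplitude-threshold spike
# detection. All parameters here are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def band_waveform(eeg, fs, low, high, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)          # zero-phase band-pass filter

def detect_spikes(trace, n_std=5.0):
    thresh = n_std * np.std(trace)
    return np.nonzero(np.abs(trace) > thresh)[0]

fs = 1000.0                             # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.random.default_rng(1).standard_normal(t.size)   # stand-in recording
gamma = band_waveform(eeg, fs, 30.0, 80.0)
print("candidate spike samples:", detect_spikes(gamma, n_std=3.0)[:10])
```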

  10. Large-scale quantitative analysis of painting arts.

    Science.gov (United States)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-11

Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images: the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increment of the roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.
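
One of the three measures, the variety of colors, can be illustrated with a short sketch that computes the Shannon entropy of a quantized RGB histogram; the 16-level quantization and the input file name are illustrative choices, not the paper's method in detail.

```python
# Sketch of a "variety of colors" measure: Shannon entropy of a quantized
# RGB histogram. The quantization level and file name are hypothetical.
import numpy as np
from PIL import Image

def color_entropy(path, levels=16):
    rgb = np.asarray(Image.open(path).convert("RGB"))
    q = (rgb // (256 // levels)).reshape(-1, 3)     # quantize each channel
    codes = q[:, 0] * levels * levels + q[:, 1] * levels + q[:, 2]
    counts = np.bincount(codes, minlength=levels**3)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())           # entropy in bits

print(color_entropy("painting.jpg"))                # hypothetical input file
```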

  11. In situ vitrification large-scale operational acceptance test analysis

    International Nuclear Information System (INIS)

    Buelt, J.L.; Carter, J.G.

    1986-05-01

A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements, so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack.

  12. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

In recent years, the number, complexity and size of large-scale networks have increased. The best example of a large-scale network is the Internet, and more recent ones are the data centers in cloud environments. Managing such networks involves several tasks, such as traffic monitoring, security and performance optimization, which together are a big task for the network administrator. This research report studies different protocols, i.e. conventional protocols like the Simple Network Management Protocol and newer Gossip bas...

  13. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
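
FastQuery itself is not shown here, but the bitmap-indexing idea behind its FastBit engine can be sketched conceptually: bin a variable once, keep one bitmap per bin, and answer range queries by OR-ing bitmaps instead of rescanning the raw data. The binning scheme and synthetic "energies" below are assumptions for illustration, not the FastQuery API.

```python
# Toy sketch of bitmap indexing (the idea behind FastBit/FastQuery):
# build one boolean bitmap per value bin, then answer range queries by
# OR-ing bitmaps. Conceptual illustration only, not the FastQuery API.
import numpy as np

def build_bitmap_index(values, edges):
    bins = np.digitize(values, edges)
    return {b: bins == b for b in np.unique(bins)}

def range_query(index, lo_bin, hi_bin):
    mask = np.zeros_like(next(iter(index.values())))
    for b, bitmap in index.items():
        if lo_bin <= b <= hi_bin:
            mask |= bitmap
    return np.nonzero(mask)[0]

energies = np.random.default_rng(2).exponential(1.0, 100_000)  # synthetic variable
edges = np.linspace(0, 10, 101)                                # 100 uniform bins
idx = build_bitmap_index(energies, edges)
hits = range_query(idx, 95, 100)          # e.g. "interesting particles" in the tail
print(len(hits), "records matched")
```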

  14. Large-scale exploration and analysis of drug combinations.

    Science.gov (United States)

    Li, Peng; Huang, Chao; Fu, Yingxue; Wang, Jinan; Wu, Ziyin; Ru, Jinlong; Zheng, Chunli; Guo, Zihu; Chen, Xuetong; Zhou, Wei; Zhang, Wenjuan; Li, Yan; Chen, Jianxin; Lu, Aiping; Wang, Yonghua

    2015-06-15

Drug combinations are a promising strategy for combating complex diseases by improving efficacy and reducing corresponding side effects. Currently, a widely studied problem in pharmacology is to predict effective drug combinations, either through empirical screening in the clinic or pure experimental trials. However, large-scale prediction of drug combinations by a systems method is rarely considered. We report a systems pharmacology framework to predict drug combinations (PreDCs) based on a computational model, termed the probability ensemble approach (PEA), for analysis of both the efficacy and adverse effects of drug combinations. First, a Bayesian network integrated with a similarity algorithm is developed to model the combinations from drug molecular and pharmacological phenotypes, and the predictions are then assessed with both clinical efficacy and adverse effects. It is illustrated that PEA can predict the combination efficacy of drugs spanning different therapeutic classes with high specificity and sensitivity (AUC = 0.90), which was further validated by independent data or new experimental assays. PEA also evaluates the adverse effects (AUC = 0.95) quantitatively and detects the therapeutic indications for drug combinations. Finally, the PreDC database includes 1571 known and 3269 predicted optimal combinations as well as their potential side effects and therapeutic indications. The PreDC database is available at http://sm.nwsuaf.edu.cn/lsp/predc.php. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques have proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  16. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques have proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  17. GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S

    2008-05-28

Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns, mainly driven by temperature gradients, inside the vapor space of a large-scaled Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimation of the flow patterns within the vapor space. One is the reference nominal case. The other considers a negative temperature gradient between the inner roof and top grout surface temperatures, intended as a potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that ambient air comes into the vapor space of the vault through the lower-end ventilation hole and is heated up by a Bénard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations are discussed.

  18. Multielement methods of atomic fluorescence analysis of environmental samples

    International Nuclear Information System (INIS)

    Rigin, V.I.

    1985-01-01

A multielement method of atomic fluorescence analysis of environmental samples, based on sample decomposition by autoclave fluorination and gas-phase atomization of volatile compounds in inductive argon plasma using a nondispersive polychromator, is suggested. Detection limits of some elements (Be, Sr, Cd, V, Mo, Te, Ru etc.) for different sample forms introduced into the analyzer are given

  19. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

Full Text Available Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and complexity of biomedical data collected from various sources. The planet-size data brings serious challenges to storage and computing technologies. Cloud computing is an alternative means of cracking the nut because it simultaneously enables storage and high-performance computing on large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making the vast amount of diverse data meaningful and usable.

  20. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gabert, Kasimir [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burns, Ian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kallaher, Jenna [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vail, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  1. Large scale sample management and data analysis via MIRACLE

    DEFF Research Database (Denmark)

    Block, Ines; List, Markus; Pedersen, Marlene Lemvig

Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. In the past years the technology advanced based on improved methods and protocols concerning sample preparation and printing, antibody selection, optimization of staining conditions and mode of signal analysis. However, the sample management and data analysis still pose challenges because of the high number of samples, sample dilutions, customized array patterns, and various programs necessary for array construction and data processing. We developed a comprehensive and user-friendly web application called MIRACLE (MIcroarray R-based Analysis of Complex Lysate Experiments), which bridges the gap between sample management and array analysis by conveniently keeping track of the sample information from lysate preparation, through array construction and signal...

  2. GECKO: a complete large-scale gene expression analysis platform

    Directory of Open Access Journals (Sweden)

    Heuer Michael

    2004-12-01

Full Text Available Background: Gecko (Gene Expression: Computation and Knowledge Organization) is a complete, high-capacity centralized gene expression analysis system, developed in response to the needs of a distributed user community. Results: Based on a client-server architecture, with a centralized repository of typically many tens of thousands of Affymetrix scans, Gecko includes automatic processing pipelines for uploading data from remote sites, a database, a computational engine implementing ~50 different analysis tools, and a client application. Among the available analysis tools are clustering methods, principal component analysis, supervised classification including feature selection and cross-validation, multi-factorial ANOVA, statistical contrast calculations, and various post-processing tools for extracting data at given error rates or significance levels. On account of its open architecture, Gecko also allows for the integration of new algorithms. The Gecko framework is very general: non-Affymetrix and non-gene-expression data can be analyzed as well. A unique feature of the Gecko architecture is the concept of the Analysis Tree (actually, a directed acyclic graph), in which all successive results in ongoing analyses are saved. This approach has proven invaluable in allowing a large (~100 users) and distributed community to share results, and to repeatedly return over a span of years to older and potentially very complex analyses of gene expression data. Conclusions: The Gecko system is being made publicly available as free software http://sourceforge.net/projects/geckoe. In totality or in parts, the Gecko framework should prove useful to users and system developers with a broad range of analysis needs.

  3. Reliability analysis of large scaled structures by optimization technique

    International Nuclear Information System (INIS)

    Ishikawa, N.; Mihara, T.; Iizuka, M.

    1987-01-01

This paper presents a reliability analysis based on an optimization technique using the PNET (Probabilistic Network Evaluation Technique) method for highly redundant structures having a large number of collapse modes. This approach makes the best use of the merits of the optimization technique in which the idea of the PNET method is used. The analytical process involves the minimization of the safety index of the representative mode, subject to satisfaction of the mechanism condition and of positive external work. The procedure entails the sequential performance of a series of NLP (Nonlinear Programming) problems, where the correlation condition pertaining to the representative mode, following the idea of the PNET method, is taken as an additional constraint in the next analysis. Upon succeeding iterations, the final analysis is achieved when the collapse probability of the subsequent mode is far smaller than the value for the first mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes classified by the extent of correlation. Then, in order to confirm the validity of the proposed method, the conventional Monte Carlo simulation is also revised by using collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)

  4. Optimization of Large Scale HEP Data Analysis in LHCb

    International Nuclear Information System (INIS)

    Remenska, Daniela; Aaij, Roel; Raven, Gerhard; Merk, Marcel; Templon, Jeff; Bril, Reinder J

    2011-01-01

Observation has led to the conclusion that the physics analysis jobs run by LHCb physicists on a local computing farm (i.e. non-grid) require more efficient access to the data, which resides on the Grid. Our experiments have shown that the I/O bound nature of the analysis jobs, in combination with the latency due to the remote access protocols (e.g. rfio, dcap), causes a low CPU efficiency of these jobs. In addition to causing a low CPU efficiency, the remote access protocols give rise to high overhead (in terms of the amount of data transferred). This paper gives an overview of the concept of pre-fetching and caching of input files in the proximity of the processing resources, which is exploited to cope with the I/O bound analysis jobs. The files are copied from Grid storage elements (using GridFTP) while concurrently performing computations, inspired by a similar idea used in the ATLAS experiment. The results illustrate that this file staging approach is relatively insensitive to the original location of the data, and a significant improvement can be achieved in terms of the CPU efficiency of an analysis job. The scalability of such a solution in the Grid environment is discussed briefly.

  5. Analysis of Decision Making Skills for Large Scale Disaster Response

    Science.gov (United States)

    2015-08-21

Excerpted list of decision-making skills: capability to influence and collaborate; compassion; teamwork; communication; leadership; provide vision of outcome / set priorities; confidence, courage to make... The project evaluates the viability of expanding the use of serious games to augment classroom training, tabletop and full-scale exercises, and actual... training, evaluation, analysis, and technology exploration. Those techniques have found successful niches, but their wider applicability faces...

  6. Large-scale Comparative Sentiment Analysis of News Articles

    OpenAIRE

    Wanner, Franz; Rohrdantz, Christian; Mansmann, Florian; Stoffel, Andreas; Oelke, Daniela; Krstajic, Milos; Keim, Daniel; Luo, Dongning; Yang, Jing; Atkinson, Martin

    2009-01-01

Online media offers great possibilities to retrieve more news items than ever. In contrast to these technical developments, human capabilities to read all these news items have not increased likewise. To bridge this gap, this poster presents a visual analytics tool for conducting semi-automatic sentiment analysis of large news feeds. The tool retrieves and analyzes news of two categories (Terrorist Attack and Natural Disasters) and news which belongs to both categories of the Europe Media ...

  7. Large Scale Analysis of Geospatial Data with Dask and XArray

    Science.gov (United States)

    Zender, C. S.; Hamman, J.; Abernathey, R.; Evans, K. J.; Rocklin, M.; Zender, C. S.; Rocklin, M.

    2017-12-01

The analysis of geospatial data with high-level languages has accelerated innovation and the impact of existing data resources. However, as datasets grow beyond single-machine memory, data structures within these high-level languages can become a bottleneck. New libraries like Dask and XArray resolve some of these scalability issues, providing interactive workflows that are both familiar to high-level-language researchers and able to scale out to much larger datasets. This broadens the access of researchers to larger datasets on high-performance computers and, through interactive development, reduces time-to-insight when compared to traditional parallel programming techniques (MPI). This talk describes Dask, a distributed dynamic task scheduler; Dask.array, a multi-dimensional array that copies the popular NumPy interface; and XArray, a library that wraps NumPy/Dask.array with labeled and indexed axes, implementing the CF conventions. We discuss both the basic design of these libraries and how they change interactive analysis of geospatial data, as well as recent benefits and challenges of distributed computing on clusters of machines.
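
A short sketch of the Dask + XArray pattern the talk describes: wrap a chunked, lazily evaluated array in labeled dimensions and express the analysis at that level. The dataset shape, chunking, and variable names are illustrative.

```python
# Sketch of the Dask + XArray pattern: a chunked array wrapped with
# labeled dimensions, analyzed lazily and computed in parallel.
import dask.array as da
import numpy as np
import xarray as xr

# Two years of hypothetical daily global temperature, chunked so Dask
# can process it out of core / in parallel (shapes are illustrative).
temp = xr.DataArray(
    da.random.random((730, 180, 360), chunks=(73, 180, 360)),
    dims=("time", "lat", "lon"),
    coords={"lat": np.linspace(-89.5, 89.5, 180),
            "lon": np.arange(0.0, 360.0, 1.0)},
    name="temperature",
)

climatology = temp.mean(dim="time")            # lazy: builds a task graph
anomaly_std = (temp - climatology).std(dim="time")
print(anomaly_std.compute().shape)             # (180, 360); triggers execution
```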

  8. A Large-Scale Analysis of Variance in Written Language.

    Science.gov (United States)

    Johns, Brendan T; Jamieson, Randall K

    2018-01-22

The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers, & Tenenbaum; Jones & Mewhort; Landauer & Dumais; Mikolov, Sutskever, Chen, Corrado, & Dean). The models treat knowledge as an interaction of processing mechanisms and the structure of language experience. But language experience is often treated agnostically. We report a distributional semantic analysis that shows written language in fiction books varies appreciably between books from different genres, books from the same genre, and even books written by the same author. Given that current theories assume that word knowledge reflects an interaction between processing mechanisms and the language environment, the analysis shows the need for the field to engage in a more deliberate consideration and curation of the corpora used in computational studies of natural language processing. Copyright © 2018 Cognitive Science Society, Inc.

  9. Large scale scenario analysis of future low carbon energy options

    International Nuclear Information System (INIS)

    Olaleye, Olaitan; Baker, Erin

    2015-01-01

In this study, we use a multi-model framework to examine a set of possible future energy scenarios resulting from R&D investments in Solar, Nuclear, Carbon Capture and Storage (CCS), Bio-fuels, Bio-electricity, and Batteries for Electric Transportation. Based on a global scenario analysis, we examine the impact on the economy of advancement in energy technologies, considering both individual technologies and the interactions between pairs of technologies, with a focus on the role of uncertainty. Nuclear and CCS have the most impact on abatement costs, with CCS mostly important at high levels of abatement. We show that CCS and Bio-electricity are complements, while most of the other energy technology pairs are substitutes. We also test for stochastic dominance between R&D portfolios: given the uncertainty in R&D outcomes, we examine which portfolios would be preferred by all decision-makers, regardless of their attitude toward risk. We observe that portfolios with CCS tend to stochastically dominate those without CCS, and portfolios lacking CCS and Nuclear tend to be stochastically dominated by others. We find that the dominance of CCS becomes even stronger as uncertainty in climate damages increases. Finally, we show that there is significant value in carefully choosing a portfolio, as relatively small portfolios can dominate large portfolios. - Highlights: • We examine future energy scenarios in the face of R&D and climate uncertainty. • We examine the impact of advancement in energy technologies and pairs of technologies. • CCS complements Bio-electricity while most technology pairs are substitutes. • R&D portfolios without CCS are stochastically dominated by portfolios with CCS. • Higher damage uncertainty favors R&D development of CCS and Bio-electricity.
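
First-order stochastic dominance, the portfolio-comparison criterion used above, is easy to sketch: for cost-like outcomes (lower is better), portfolio A dominates B if A's empirical CDF lies at or above B's everywhere. The sample outcome distributions below are made up for illustration.

```python
# Sketch of a first-order stochastic dominance check between two
# hypothetical R&D portfolio outcome samples (e.g., abatement costs,
# lower is better). The lognormal samples are illustrative stand-ins.
import numpy as np

def dominates_first_order(costs_a, costs_b, grid_size=200):
    grid = np.linspace(min(costs_a.min(), costs_b.min()),
                       max(costs_a.max(), costs_b.max()), grid_size)
    cdf_a = np.searchsorted(np.sort(costs_a), grid, side="right") / costs_a.size
    cdf_b = np.searchsorted(np.sort(costs_b), grid, side="right") / costs_b.size
    # A dominates B if A's CDF is everywhere >= B's: preferred by all
    # decision-makers regardless of risk attitude.
    return bool(np.all(cdf_a >= cdf_b))

rng = np.random.default_rng(3)
with_ccs = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)
without_ccs = rng.lognormal(mean=1.3, sigma=0.5, size=10_000)
print("portfolio with CCS dominates:", dominates_first_order(with_ccs, without_ccs))
```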

  10. Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis

    Science.gov (United States)

    Chow, Kui Foon; Kennedy, Kerry John

    2014-01-01

    International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…

  11. Multielement neutron-activation analysis of plants and fertilizers

    International Nuclear Information System (INIS)

    Srapenyants, R.A.; Saveliev, I.B.

    1977-01-01

The development of an automated technique for simultaneous multielement activation analysis of plants and fertilizers for the macronutrient elements N, P, K, Ca, Mg, Cl, and Si is presented. The universal NAA technique developed is based on an installation manufactured and supplied by Sames, France. The components of the automatic installation for neutron activation analysis are: a neutron generator; a pneumatic transfer system; a scintillation crystal detector; a spectrometer rack including a basic multichannel analyser; a control panel for the neutron generator and pneumatic transfer system; a computer and a teletype. On the basis of analytical procedures, algorithms and software, the first automatic (computer-based) installation for multielement analyses of plants and fertilizers has been completed and is in routine use in the agrochemical and plant breeding research program in the Soviet Union. The proposed technique, together with the fully automatic real-time process of measurement and processing of data by computer, provides a throughput of 250-500 samples (1250-2500 element determinations) per 8-hour shift, with an accuracy of ±3% for N, ±5% for P, K, Mg and Cl, and ±15% for Ca. (T.G.)

  12. Multielement analysis of Nigerian traditional (black) soaps

    International Nuclear Information System (INIS)

    Akanni, M.S.; Ogugbuaja, V.O.

    1985-01-01

    The element contents of some Nigerian traditional soap samples were determined using thermal neutron activation analysis. The quality control consists of replicate analyses of standard 1632A bituminous coal for precision and accuracy determination. Potassium is found to be the major element in the soaps. While some elements show fairly constant concentration in all samples analyzed, others have high maximum/minimum ratios. The elemental concentration variation in the soaps may likely have effects on their relative foaming capability and such variation is linked to the physical environment where the starting materials are obtained. (author)

  13. Multielement analysis of water in Yodo River

    International Nuclear Information System (INIS)

    Mamuro, Tetsuo; Mizohata, Akira; Matsunami, Tadao; Matsuda, Yatsuka

    1980-01-01

The Yodo River is a major source of water supplies in the Osaka district. Three tributaries, including the Katsura River, flow into this river at closely spaced points. It is known that the Katsura River is considerably polluted due to the sewage treatment in Kyoto City. Following the previous survey in September 1970, a similar survey by neutron activation was carried out on the pollution of the Yodo River in October 1977, with an increased number of sampling points. Because it is reported that the pollution of the Katsura River has been largely reduced since the previous survey, the purpose was to grasp the present situation of the water pollution of the Yodo River due to metal elements and others, and further to examine it in relation to material balance. The procedures used were, first, the evaporation and solidification of sample water, and then neutron activation analysis. The correlation among the concentrations of elements, the pattern of the concentrations of elements, the material balance along the Yodo River, etc. are described in this paper. (J.P.N.)

  14. Analysis for preliminary evaluation of discrete fracture flow and large-scale permeability in sedimentary rocks

    International Nuclear Information System (INIS)

    Kanehiro, B.Y.; Lai, C.H.; Stow, S.H.

    1987-05-01

Conceptual models for sedimentary rock settings that could be used in future evaluation and suitability studies are being examined through the DOE Repository Technology Program. One area of concern for the hydrologic aspects of these models is discrete fracture flow analysis as related to the estimation of the size of the representative elementary volume, evaluation of the appropriateness of continuum assumptions, and estimation of the large-scale permeabilities of sedimentary rocks. A basis for preliminary analysis of flow in fracture systems of the types that might be expected to occur in low-permeability sedimentary rocks is presented. The approach used involves numerical modeling of discrete fracture flow for the configuration of a large-scale hydrologic field test directed at estimation of the size of the representative elementary volume and large-scale permeability. Analysis of fracture data on the basis of this configuration is expected to provide a preliminary indication of the scale at which continuum assumptions can be made.

  15. Analysis of environmental impact assessment for large-scale X-ray medical equipments

    International Nuclear Information System (INIS)

    Fu Jin; Pei Chengkai

    2011-01-01

Based on an Environmental Impact Assessment (EIA) project, this paper elaborates the basic essentials of EIA analysis for the sales project of large-scale X-ray medical equipment, and provides the procedure for environmental impact analysis and the dose estimation method under normal and accident conditions. The key points of EIA for the sales project of large-scale X-ray medical equipment include the determination of pollution factors and management limit values according to the project's actual situation, and the utilization of various methods of assessment and prediction, such as analogy, actual measurement and calculation, to analyze, monitor, calculate and predict the pollution under normal and accident conditions. (authors)

  16. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄)

  17. Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment

    Science.gov (United States)

    Cui, Ying; Mousavi, Amin

    2015-01-01

The current study applied the person-fit statistic, l_z, to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l_z, were removed. The…
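
For reference, the l_z statistic is the standardized log-likelihood of a response pattern; a minimal sketch under a Rasch model (the Drasgow-style formulation) follows, with illustrative ability and difficulty values rather than anything from the study.

```python
# Sketch of the l_z person-fit statistic under a Rasch model:
# l_z = (l_0 - E[l_0]) / sqrt(Var[l_0]), where l_0 is the log-likelihood
# of the observed response pattern. Values below are illustrative.
import numpy as np

def lz(responses, theta, b):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))       # Rasch P(correct | theta, b)
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - e) / np.sqrt(v)

b = np.linspace(-2, 2, 20)                       # hypothetical item difficulties
resp = (np.random.default_rng(4).random(20) < 0.7).astype(int)
print(f"l_z = {lz(resp, theta=0.5, b=b):.2f}")   # large negative values => misfit
```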

  18. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    Science.gov (United States)

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu.

  19. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    Science.gov (United States)

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance for understanding normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large-scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.

  20. Malware Analysis: From Large-Scale Data Triage to Targeted Attack Recognition (Dagstuhl Seminar 17281)

    OpenAIRE

    Zennou, Sarah; Debray, Saumya K.; Dullien, Thomas; Lakhothia, Arun

    2018-01-01

This report summarizes the program and the outcomes of Dagstuhl Seminar 17281, entitled "Malware Analysis: From Large-Scale Data Triage to Targeted Attack Recognition". The seminar brought together practitioners and researchers from industry and academia to discuss the state-of-the-art in the analysis of malware from both a big-data perspective and a fine-grained analysis. Obfuscation was also considered. The meeting created new links within this very diverse community.

  1. A Combined Ethical and Scientific Analysis of Large-scale Tests of Solar Climate Engineering

    Science.gov (United States)

    Ackerman, T. P.

    2017-12-01

    Our research group recently published an analysis of the combined ethical and scientific issues surrounding large-scale testing of stratospheric aerosol injection (SAI; Lenferna et al., 2017, Earth's Future). We are expanding this study in two directions. The first is extending this same analysis to other geoengineering techniques, particularly marine cloud brightening (MCB). MCB has substantial differences to SAI in this context because MCB can be tested over significantly smaller areas of the planet and, following injection, has a much shorter lifetime of weeks as opposed to years for SAI. We examine issues such as the role of intent, the lesser of two evils, and the nature of consent. In addition, several groups are currently considering climate engineering governance tools such as a code of ethics and a registry. We examine how these tools might influence climate engineering research programs and, specifically, large-scale testing. The second direction of expansion is asking whether ethical and scientific issues associated with large-scale testing are so significant that they effectively preclude moving ahead with climate engineering research and testing. Some previous authors have suggested that no research should take place until these issues are resolved. We think this position is too draconian and consider a more nuanced version of this argument. We note, however, that there are serious questions regarding the ability of the scientific research community to move to the point of carrying out large-scale tests.

  2. The multielement potential of fast neutron cyclic activation analysis

    International Nuclear Information System (INIS)

    Nonie, S.E.; Randle, K.

    1994-01-01

Cyclic neutron activation analysis (CNAA) has, in recent years, been developed as a useful analytical tool for the assay of short-lived isotopes in single-element situations. The work described in this paper investigates the potential of the technique for composite samples having a wide range of elements that produce short-lived and long-lived isotopes on neutron irradiation. Accelerator-derived neutrons with average energies of 3 MeV, 6 MeV and 14 MeV were employed in what has been dubbed 'Fast Neutron Cyclic Activation Analysis' (FNCAA). The approach to multi-element analysis entailed: determination of cycle parameters in single-element samples via the reactions ²⁷Al(n,p)²⁷Mg (9.6 min, Eγ = 840 keV) and ¹³⁷Ba(n,n′γ)¹³⁷ᵐBa (2.3 min, Eγ = 662 keV); a test of the method on a composite rock sample; determination of analytical sensitivities using both powdered kale and rock standards; and a comparison of analytical results with other techniques. The results obtained in all these measurements are presented and discussed. (author) 10 refs.; 3 figs.; 5 tabs

  3. Large-Scale Analysis of Framework-Specific Exceptions in Android Apps

    OpenAIRE

    Fan, Lingling; Su, Ting; Chen, Sen; Meng, Guozhu; Liu, Yang; Xu, Lihua; Pu, Geguang; Su, Zhendong

    2018-01-01

    Mobile apps have become ubiquitous. For app developers, it is a key priority to ensure their apps' correctness and reliability. However, many apps still suffer from occasional to frequent crashes, weakening their competitive edge. Large-scale, deep analyses of the characteristics of real-world app crashes can provide useful insights to guide developers, or help improve testing and analysis tools. However, such studies do not exist -- this paper fills this gap. Over a four-month long effort, w...

  4. Quantification of multielement-multilayer-samples in electron probe analysis

    International Nuclear Information System (INIS)

    Pfeiffer, A.

    1995-03-01

The following dissertation presents the theoretical basis of analytical correction models and Monte Carlo simulations in the field of electron probe microanalysis to describe the excitation conditions of x-rays in a multilayer, multielement sample. In this connection, analysis programs have been developed to make the quantitative investigation of heterogeneous samples possible. The work describes the mathematical methods and formulas, which are mainly based on empirical and semiempirical findings, and discusses their validity in detail. In particular, the improvements of the 'multiple reflections' model by August are compared with the Φ(ρz) models by Pouchou, Merlet and Bastin. The calculations of depth distribution functions for characteristic and continuous fluorescence excitation result in a consistent and complete Φ(ρz) model. This allows layered structures to be analyzed in great detail. Because of its increasing importance in electron probe microanalysis, and as a reference method, a Monte Carlo model is described. With this model, electron trajectories and excitation conditions in arbitrary two-dimensional geometries can be calculated. The validity of the analytical model is proven by a comprehensive comparison of newly calculated results with published data. To show an application of the programs and models in routine use in industrial research and development, a quantitative analysis of a Co/Si system is made. The dissertation concludes with an outline of future investigations based on this work. (author)

  5. Multielement analysis of water in the Yodo River

    International Nuclear Information System (INIS)

    Mamuro, Tetsuo; Mizohata, Akira; Matsunami, Tadao; Matsuda, Yatsuka

    1979-01-01

In 1970 we performed a multielement analysis of water samples collected at various points along the Yodo River, which is the main source of tap water supply in the Osaka district, in order to assess the extent of pollution, especially by metallic elements. The analytical results were discussed from the standpoint of material balance. In 1977 we again made a similar survey, with an increased number of sampling points. It was revealed that the pollution pattern was quite similar to that found formerly, but the concentrations of the elements originating mainly from human activities had somewhat decreased. The material balance was discussed in greater detail. We attempted to explain the change of the elemental concentrations along the stream, taking into consideration the inflow from small brooks and the withdrawal by water purification plants. In the downstream reaches, where the flow speed is very low, the concentrations of the elements originating mainly from soil were considerably low, possibly due to the precipitation of particulates, and the concentrations of the soluble elements originating from human activities were also somewhat low, possibly because of the withdrawal of relatively more polluted water by purification plants. (author)

  6. Large-scale data analysis of power grid resilience across multiple US service regions

    Science.gov (United States)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
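
The "top 20% of failures interrupted 84% of services" finding is a concentration statistic; a minimal sketch of that computation on synthetic, heavy-tailed outage sizes follows (the event sizes are stand-ins, not the utilities' data).

```python
# Sketch of the concentration statistic behind "the top 20% of failures
# interrupted 84% of services": sort failure events by customers affected
# and measure the share contributed by the largest 20%. Synthetic data.
import numpy as np

def top_share(impacts, fraction=0.2):
    impacts = np.sort(np.asarray(impacts))[::-1]        # largest first
    k = max(1, int(round(fraction * impacts.size)))
    return impacts[:k].sum() / impacts.sum()

rng = np.random.default_rng(5)
customers_affected = rng.pareto(a=1.2, size=5000) + 1   # heavy-tailed impacts
print(f"top 20% of failures account for {top_share(customers_affected):.0%} of impact")
```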

  7. A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.

    Science.gov (United States)

    Halloran, John T; Rocke, David M

    2018-05-04

    Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires only about a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to only about a fifth of the original runtime. Importantly, these speedups affect only the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under the Apache license at bitbucket.org/jthalloran/percolator_upgrade.
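
    As a rough illustration of the underlying idea only (Percolator's actual algorithm is semi-supervised and iterative; the features and labels below are hypothetical placeholders), a linear SVM can be trained to separate target from decoy peptide-spectrum matches, with the signed distance to the decision boundary serving as a recalibrated score:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical PSM feature matrix: rows are peptide-spectrum matches,
# columns are search-engine features (score, delta score, charge, ...).
rng = np.random.default_rng(0)
X_targets = rng.normal(loc=1.0, size=(1000, 5))   # placeholder target PSMs
X_decoys = rng.normal(loc=-1.0, size=(1000, 5))   # placeholder decoy PSMs
X = np.vstack([X_targets, X_decoys])
y = np.concatenate([np.ones(1000), -np.ones(1000)])  # +1 target, -1 decoy

# Linear SVM on the target/decoy labels (l2-regularized hinge loss).
svm = LinearSVC(C=1.0)
svm.fit(X, y)

# Signed distance to the learned boundary as a recalibrated PSM score;
# larger values indicate more target-like matches.
recalibrated = svm.decision_function(X)
print(recalibrated[:5])
```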

  8. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).
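
    A schematic of the truncation idea, greatly simplified relative to the paper's NBO machinery (the density matrix and the local index set below are hypothetical): restricting the density matrix to the basis functions of a local region and diagonalizing that block yields fragment natural orbitals at a cost independent of the total system size.

```python
import numpy as np

# Hypothetical density matrix of a whole system in an orthonormal basis;
# in practice it would come from an O(N) or exact-diagonalization run.
n_basis = 500
rng = np.random.default_rng(1)
C = rng.normal(size=(n_basis, n_basis // 2))
P = 2.0 * C @ C.T / n_basis  # symmetric positive semidefinite placeholder

# Basis functions centered on atoms in the local region of interest
# (hypothetical index set).
local = np.arange(20, 45)

# Truncate the density matrix to the local block; this step costs O(1)
# with respect to the total system size.
P_local = P[np.ix_(local, local)]

# Natural orbitals of the fragment and their occupation numbers.
occupations, orbitals = np.linalg.eigh(P_local)
print(np.sort(occupations)[::-1][:5])  # largest occupations first
```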

  9. Analysis of Utilization of Fecal Resources in Large-scale Livestock and Poultry Breeding in China

    Directory of Open Access Journals (Sweden)

    XUAN Meng

    2018-02-01

    The purpose of this paper is to develop a systematic investigation of the serious problems of livestock and poultry breeding in China and of the technical demand for promoting the utilization of manure. Based on the status quo of large-scale livestock and poultry farming in typical areas of China, statistics and analysis of the modes and proportions of manure resource utilization were compiled. The statistical method was applied to the state-identified large-scale farms for which the total pollutant reduction was in accordance with the "12th Five-Year Plan" standards. The results showed differences in the modes of resource utilization of livestock and poultry manure across scales and types: (1) hogs, dairy cattle and beef cattle together accounted for more than 75% of the agricultural manure storage; (2) laying hens and broiler chickens accounted for about 65% of the total production of organic manure produced from feces. The major modes of resource utilization of dung and urine were related to the natural characteristics, agricultural production methods, farming scale and economic development level of each area. It was concluded that unreasonable planning, lack of cleansing during breeding, and poor selection of manure utilization modes were the major problems in China's large-scale livestock and poultry fecal resource utilization.

  10. Analysis of the applicability of fracture mechanics on the basis of large scale specimen testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Polachova, H.; Sulc, J.; Anikovskij, V.; Dragunov, Y.; Rivkin, E.; Filatov, V.

    1988-01-01

    The paper deals with the verification of fracture mechanics calculations for WWER reactor pressure vessels by large-scale model testing performed on the large ZZ 8000 testing machine (maximum load 80 MN) at the Skoda Concern. The results of testing a large set of large-scale specimens with surface crack-type defects are presented. The nominal thickness of the specimens was 150 mm, with defect depths between 15 and 100 mm and testing temperatures between -30 and +80 degC (i.e., in the temperature interval Tk0 ± 50 degC). Specimens at scales of 1:8 and 1:12 were also tested, as well as standard (CT and TPB) specimens. Comparison of the test results with calculations suggests some conservatism of the calculations (especially for small defects) based on linear elastic fracture mechanics, according to the nuclear reactor pressure vessel codes which use fracture mechanics values from J_IC testing. On the basis of the large-scale tests, a 'Defect Analysis Diagram' was constructed and recommended for the brittle fracture assessment of reactor pressure vessels. (author). 7 figs., 2 tabs., 3 refs

  11. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    The continuous simulation approach avoids drawbacks reported in traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.

  12. Applications of Data Assimilation to Analysis of the Ocean on Large Scales

    Science.gov (United States)

    Miller, Robert N.; Busalacchi, Antonio J.; Hackert, Eric C.

    1997-01-01

    It is commonplace to begin talks on this topic by noting that oceanographic data are too scarce and sparse to provide complete initial and boundary conditions for large-scale ocean models. Even considering the availability of remotely sensed data such as radar altimetry from the TOPEX and ERS-1 satellites, a glance at a map of available subsurface data should convince most observers that this is still the case. Data are still too sparse for comprehensive treatment of interannual to interdecadal climate change through the use of models, since the new data sets have not been around for very long. In view of the dearth of data, we must note that the overall picture is changing rapidly. Recently, there have been a number of large-scale ocean analysis and prediction efforts, some of which now run on an operational or at least quasi-operational basis, most notably the model-based analyses of the tropical oceans. These programs are modeled on numerical weather prediction. Aside from the success of the global tide models, assimilation of data in the tropics, in support of prediction and analysis of seasonal to interannual climate change, is probably the area of large-scale ocean modeling and data assimilation in which the most progress has been made. Climate change is a problem which is particularly suited to advanced data assimilation methods. Linear models are useful, and the linear theory can be exploited. For the most part, the data are sufficiently sparse that implementation of advanced methods is worthwhile. As an example of a large-scale data assimilation experiment with a recent extensive data set, we present results of a tropical ocean experiment in which the Kalman filter was used to assimilate three years of altimetric data from Geosat into a coarsely resolved linearized long-wave shallow water model. Since nonlinear processes dominate the local dynamic signal outside the tropics, subsurface dynamical quantities cannot be reliably inferred from surface height.
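
    A minimal sketch of the Kalman filter analysis step used in such assimilation experiments (the dimensions and matrices are illustrative placeholders, not the paper's shallow water model): the forecast state and its error covariance are corrected toward the observations in proportion to the Kalman gain.

```python
import numpy as np

n, m = 50, 10                            # state size, number of observations
rng = np.random.default_rng(2)

x_forecast = rng.normal(size=n)          # model forecast state
P = 0.5 * np.eye(n)                      # forecast error covariance
H = 0.1 * rng.normal(size=(m, n))        # observation operator (placeholder)
R = 0.2 * np.eye(m)                      # observation error covariance
y = H @ x_forecast + rng.normal(scale=0.3, size=m)  # synthetic altimetry

# Kalman gain weighs the observations against the forecast uncertainty.
S = H @ P @ H.T + R                      # innovation covariance
K = P @ H.T @ np.linalg.inv(S)

# Analysis: update the state and covariance with the innovation.
x_analysis = x_forecast + K @ (y - H @ x_forecast)
P_analysis = (np.eye(n) - K @ H) @ P
print(np.linalg.norm(x_analysis - x_forecast))
```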

  13. Preparing laboratory and real-world EEG data for large-scale analysis: A containerized approach

    Directory of Open Access Journals (Sweden)

    Nima Bigdely-Shamlo

    2016-03-01

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface (BCI) models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a containerized approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data levels, each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).

  14. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    International Nuclear Information System (INIS)

    Chen, H.-W.; Chang, N.-B.; Chen, J.-C.; Tsai, S.-J.

    2010-01-01

    Owing to limited land resources, incinerators are considered in many countries, such as Japan and Germany, to be the major technology in a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA) - a production economics tool - to evaluate the performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan under different operational conditions. A 4-year operational data set covering 2002 to 2005 was collected in support of DEA modeling, using Monte Carlo simulation to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis using Monte Carlo simulation provides a balance between simplification of the analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan but also elsewhere in the world.

  16. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-12-01

    The ever-growing demand for wireless technologies necessitates the evolution of next-generation wireless networks that fulfill diverse wireless users' requirements. However, upscaling existing wireless networks implies upscaling an intrinsic component of the wireless domain: the aggregate network interference. Since this is the main performance-limiting factor, it becomes crucial to develop a rigorous analytical framework to accurately characterize out-of-cell interference in order to reap the benefits of emerging networks. Because of the different network setups and key performance indicators, it is essential to conduct a comprehensive study that unifies the various network configurations together with the different tangible performance metrics. In that regard, the focus of this thesis is to present a unified mathematical paradigm, based on Stochastic Geometry, for large-scale networks with different antenna/network configurations. By exploiting such a unified study, we propose an efficient automated network design strategy to satisfy the desired network objectives. First, this thesis studies the exact characterization of the aggregate network interference by accounting for each interferer's signal in the large-scale network. Second, we show that the information about the interferers' symbols can be approximated via the Gaussian signaling approach. The developed mathematical model provides a twofold unification of the uplink and downlink cellular network literature: it aligns the tangible decoding error probability analysis with the abstract outage probability and ergodic rate analysis, and it unifies the analysis for different antenna configurations, i.e., various multiple-input multiple-output (MIMO) systems. Accordingly, we propose a novel reliable network design strategy that is capable of appropriately adjusting the network parameters to meet desired design criteria. In addition, we discuss the diversity-multiplexing tradeoffs imposed by differently favored
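
    As a toy illustration of the stochastic geometry viewpoint (the parameters are arbitrary and this is not the thesis's unified model), the following Monte Carlo sketch drops interferers as a Poisson point process on a disk and estimates the mean aggregate interference at the origin under power-law path loss.

```python
import numpy as np

rng = np.random.default_rng(3)

def aggregate_interference(density, radius, alpha, n_trials=10000):
    """Mean aggregate interference at the origin from a Poisson point
    process of unit-power interferers with path-loss exponent alpha."""
    totals = np.empty(n_trials)
    area = np.pi * radius**2
    for t in range(n_trials):
        n = rng.poisson(density * area)       # number of interferers
        r = radius * np.sqrt(rng.random(n))   # uniform distances in a disk
        r = np.maximum(r, 1.0)                # exclusion zone near the origin
        totals[t] = np.sum(r ** (-alpha))     # power-law path loss
    return totals.mean()

# Example: density 1e-4 interferers/m^2, 1 km disk, path-loss exponent 4.
print(aggregate_interference(density=1e-4, radius=1000.0, alpha=4.0))
```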

  17. Inference of functional properties from large-scale analysis of enzyme superfamilies.

    Science.gov (United States)

    Brown, Shoshana D; Babbitt, Patricia C

    2012-01-02

    As increasingly large amounts of data from genome and other sequencing projects become available, new approaches are needed to determine the functions of the proteins these genes encode. We show how large-scale computational analysis can help to address this challenge by linking functional information to sequence and structural similarities using protein similarity networks. Network analyses using three functionally diverse enzyme superfamilies illustrate the use of these approaches for facile updating and comparison of available structures for a large superfamily, for creation of functional hypotheses for metagenomic sequences, and to summarize the limits of our functional knowledge about even well studied superfamilies.

  19. Development of a large-scale general purpose two-phase flow analysis code

    International Nuclear Information System (INIS)

    Terasaka, Haruo; Shimizu, Sensuke

    2001-01-01

    A general-purpose three-dimensional two-phase flow analysis code has been developed for solving large-scale problems in industrial fields. The code uses a two-fluid model to describe the conservation equations for two-phase flow, making it applicable to various phenomena. Complicated geometrical conditions are modeled by the FAVOR method in structured grid systems, and the discretized equations are solved by a modified SIMPLEST scheme. To reduce computing time, the matrix solver for the pressure correction equation is parallelized with OpenMP. Results of numerical examples show that accurate solutions can be obtained efficiently and stably. (author)

  20. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    Science.gov (United States)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum: IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing of, and interaction between, engineering and safety organizations can either benefit or hinder the overall end product. The traditional approach to the timing and content of formal phase safety reviews, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to the timing and content of formal phase safety reviews for IHA. The description of the tailoring process for IHA shows how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story, particularly for systems where the integrator inherently has little to no insight into lower-level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  1. Safety Effect Analysis of the Large-Scale Design Changes in a Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun-Chan; Lee, Hyun-Gyo [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2015-05-15

    These activities were predominantly focused on replacing obsolete systems with new ones, not only to prolong plant life but also to guarantee the safe operation of the units. This review demonstrates the safety effect evaluation, using probabilistic safety assessment (PSA), of the design changes, system improvements, and Fukushima accident action items for Kori unit 1 (K1). For the large-scale system design changes at K1, the safety effects from the PSA perspective were reviewed using the risk quantification results before and after the system improvements. The evaluation considered seven significant design changes, including the replacement of the control building air conditioning system and the performance improvement of the containment sump with a new filtering system, as well as the five system design changes mentioned above. The analysis results demonstrated that the CDF was reduced by 12% overall, from 1.62E-5/y to 1.43E-5/y. The CDF reduction was larger in the transient group than in the loss of coolant accident (LOCA) group. In conclusion, the analysis using the K1 PSA model supports that plant safety has been appropriately maintained after the large-scale design changes, in consideration of the changed operation factors and failure modes due to the system improvements.

  2. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources required for large-scale whole-genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using the Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log run metrics of data processing and to monitor multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available
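
    A minimal sketch of the file-splitting idea behind improvement (2) (a hypothetical helper, not Rainbow's actual code): FASTQ records span exactly four lines, so a large file can be chunked on record boundaries for downstream load balancing.

```python
import itertools

def split_fastq(path, records_per_chunk=1_000_000):
    """Split a FASTQ file into chunks on 4-line record boundaries
    (hypothetical helper illustrating downstream load balancing)."""
    with open(path) as handle:
        chunk_index = 0
        while True:
            # One FASTQ record = header, sequence, separator, qualities.
            lines = list(itertools.islice(handle, 4 * records_per_chunk))
            if not lines:
                break
            with open(f"{path}.chunk{chunk_index:04d}", "w") as out:
                out.writelines(lines)
            chunk_index += 1

# Usage (hypothetical file name):
# split_fastq("sample_R1.fastq", records_per_chunk=500_000)
```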

  3. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins (PCMs) is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that the conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes applying the simulation models to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)
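
    For reference, the Herfindahl-Hirschman Index mentioned in Chapter 10 is simply the sum of squared market shares; a minimal computation with made-up generation shares looks like this:

```python
def hhi(market_shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares in
    percent; ranges from near 0 (atomistic) to 10000 (monopoly)."""
    return sum(s**2 for s in market_shares_percent)

# Hypothetical wholesale market with five generating firms (shares in %).
shares = [35, 25, 20, 15, 5]
print(hhi(shares))  # 2500; markets above ~2500 are commonly called highly concentrated
```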

  4. Multi-element analysis for environmental characterization and its future trends

    International Nuclear Information System (INIS)

    Sansoni, B.

    1987-04-01

    Before starting to characterize the environment by its elemental composition, it may be useful to ask about the objective of these efforts. This includes questions about the scope of environmental protection, the definition of the environment and the limitations of its characterization by elemental composition alone. In the second part of this lecture, examples are given of the elemental composition of well analysed samples from the atmosphere, hydrosphere, lithosphere and biosphere. The third part introduces the principle of multi-element analysis and the fourth part gives examples. Finally, future aspects of modern chemical analysis are outlined with respect to the multi-element principle. (orig.)

  5. Demonstration of Mobile Auto-GPS for Large Scale Human Mobility Analysis

    Science.gov (United States)

    Horanont, Teerayut; Witayangkurn, Apichon; Shibasaki, Ryosuke

    2013-04-01

    The greater affordability of digital devices and the advancement of positioning and tracking capabilities have ushered in today's age of geospatial Big Data. In addition, the emergence of massive mobile location data and the rapid increase in computational capabilities open up new opportunities for modeling large-scale urban dynamics. In this research, we demonstrate a new type of mobile location data called "Auto-GPS" and its potential use cases for urban applications. More than one million Auto-GPS mobile phone users in Japan were observed nationwide, in completely anonymous form, for an entire year from August 2010 to July 2011 for this analysis. A spate of natural disasters and other emergencies during the past few years has prompted new interest in how mobile location data can help enhance our security, especially in urban areas, which are highly vulnerable to these impacts. New insights gleaned from mining the Auto-GPS data suggest a number of promising directions for modeling human movement during a large-scale crisis. We question how people react under critical situations and how their movement changes during severe disasters. Our results examine the case of a major earthquake, showing how people living in the Tokyo metropolitan area and its vicinity behaved and returned home after the Great East Japan Earthquake on March 11, 2011.

  6. Large-scale gene function analysis with the PANTHER classification system.

    Science.gov (United States)

    Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D

    2013-08-01

    The PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.

  7. Watchdog - a workflow management system for the distributed analysis of large-scale experimental data.

    Science.gov (United States)

    Kluge, Michael; Friedel, Caroline C

    2018-03-13

    The development of high-throughput experimental technologies, such as next-generation sequencing, has led to new challenges for handling, analyzing and integrating the resulting large and diverse datasets. Bioinformatical analysis of these data commonly requires a number of mutually dependent steps applied to numerous samples for multiple conditions and replicates. To support these analyses, a number of workflow management systems (WMSs) have been developed to allow automated execution of corresponding analysis workflows. Major advantages of WMSs are the easy reproducibility of results as well as the reusability of workflows or their components. In this article, we present Watchdog, a WMS for the automated analysis of large-scale experimental data. Main features include straightforward processing of replicate data, support for distributed computer systems, customizable error detection and manual intervention in workflow execution. Watchdog is implemented in Java and is thus platform-independent, and it allows easy sharing of workflows and corresponding program modules. It provides a graphical user interface (GUI) for workflow construction using pre-defined modules as well as a helper script for creating new module definitions. Workflows can be executed using either the GUI or a command-line interface, and a web interface is provided for monitoring the execution status and intervening in case of errors. To illustrate its potential on a real-life example, a comprehensive workflow and modules for the analysis of RNA-seq experiments were implemented and are provided with the software in addition to simple test examples. Watchdog is a powerful and flexible WMS for the analysis of large-scale high-throughput experiments. We believe it will greatly benefit both users with and without programming skills who want to develop and apply bioinformatical workflows with reasonable overhead. The software, example workflows and comprehensive documentation are freely available.

  8. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
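
    A compact sketch of the DUA idea under the standard first-order assumption (the model and parameter values below are made up): the result variance is propagated from the parameter variances through the model derivatives, so a single derivative evaluation replaces many random samples. Finite differences stand in here for the computer-calculus (GRESS/ADGEN-style) derivatives.

```python
import numpy as np

def model(p):
    # Hypothetical model result as a function of three parameters.
    return p[0] * np.exp(-p[1]) + p[2] ** 2

def gradient(f, p, h=1e-6):
    """Central finite differences as a stand-in for automated derivatives."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = h
        g[i] = (f(p + dp) - f(p - dp)) / (2 * h)
    return g

p_mean = np.array([2.0, 0.5, 1.0])   # parameter means (illustrative)
p_std = np.array([0.1, 0.05, 0.2])   # parameter standard deviations

g = gradient(model, p_mean)
# First-order propagation for independent parameters:
# var(y) ~ sum_i (dy/dp_i)^2 * var(p_i)
y_std = np.sqrt(np.sum((g * p_std) ** 2))
print(model(p_mean), y_std)
```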

  9. A numerical formulation and algorithm for limit and shakedown analysis of large-scale elastoplastic structures

    Science.gov (United States)

    Peng, Heng; Liu, Yinghua; Chen, Haofeng

    2018-05-01

    In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Rather than solving a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions in which the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique based on the static shakedown theorem. Three numerical examples, with up to about 140,000 finite element nodes, confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and accuracy of the proposed algorithm.

  10. Large-scale analysis of phosphorylation site occupancy in eukaryotic proteins

    DEFF Research Database (Denmark)

    Rao, R Shyama Prasad; Møller, Ian Max

    2012-01-01

    Many recent high-throughput technologies have enabled large-scale discoveries of new phosphorylation sites and phosphoproteins. Although they have provided a number of insights into protein phosphorylation and the related processes, an inclusive analysis of the nature of phosphorylated sites in proteins is currently lacking. We have therefore analyzed the occurrence and occupancy of phosphorylated sites (~100,281) in a large set of eukaryotic proteins (~22,995). Phosphorylation probability was found to be much higher at both termini of protein sequences, and this is much more pronounced … maximum randomness. An analysis of phosphorylation motifs indicated that just 40 motifs and a much lower number of associated kinases might account for nearly 50% of the known phosphorylations in eukaryotic proteins. Our results provide a broad picture of the phosphorylation sites in eukaryotic proteins.

  11. Multi-element analysis of unidentified fallen objects from Tatale in ...

    African Journals Online (AJOL)

    A multi-element analysis has been carried out on two fallen objects, # 01 and # 02, using the instrumental neutron activation analysis technique. A total of 17 elements were identified in object # 01 while 21 elements were found in object # 02. The two major elements in object # 01 were Fe and Mg, which together constitute ...

  12. Large-scale transcriptome analysis reveals Arabidopsis metabolic pathways are frequently influenced by different pathogens.

    Science.gov (United States)

    Jiang, Zhenhong; He, Fei; Zhang, Ziding

    2017-07-01

    Through large-scale transcriptional data analyses, we highlighted the importance of plant metabolism in plant immunity and identified 26 metabolic pathways that were frequently influenced by infection with 14 different pathogens. Reprogramming of plant metabolism is a common phenomenon in plant defense responses. Currently, a large number of transcriptional profiles of infected tissues in Arabidopsis (Arabidopsis thaliana) have been deposited in public databases, which provides a great opportunity to understand the expression patterns of metabolic pathways during plant defense responses at the systems level. Here, we performed a large-scale transcriptome analysis based on 135 previously published expression samples, covering 14 different pathogens, to explore the expression patterns of Arabidopsis metabolic pathways. Overall, the expression of metabolic genes changes significantly during plant defense responses. Upregulated metabolic genes are enriched in defense responses, and downregulated genes are enriched in photosynthesis and fatty acid and lipid metabolic processes. Gene set enrichment analysis (GSEA) identifies 26 frequently differentially expressed metabolic pathways (FreDE_Paths) that are differentially expressed in more than 60% of infected samples. These pathways are involved in the generation of energy, fatty acid and lipid metabolism, and secondary metabolite biosynthesis. Clustering analysis based on the expression levels of these 26 metabolic pathways clearly distinguishes infected and control samples, further suggesting the importance of these metabolic pathways in plant defense responses. By comparing with FreDE_Paths from abiotic stresses, we find that the expression patterns of the 26 FreDE_Paths from biotic stresses are more consistent across different infected samples. By investigating the expression correlation between transcription factors (TFs) and FreDE_Paths, we identify several notable relationships. Collectively, the current study
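
    As a reminder of the statistics behind "frequently differentially expressed" pathways, a minimal pathway over-representation test (with illustrative numbers, not the paper's GSEA implementation) can be phrased with the hypergeometric distribution:

```python
from scipy.stats import hypergeom

# Illustrative numbers: a genome of 25,000 genes, a metabolic pathway of
# 120 genes, 2,000 genes differentially expressed after infection, of
# which 30 fall in the pathway.
N, K, n, k = 25000, 120, 2000, 30

# P-value: probability of seeing k or more pathway genes among the
# differentially expressed genes by chance alone.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.2e}")
```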

  13. Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset

    Science.gov (United States)

    Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.

    2017-12-01

    Here we present the first national-scale flood risk analyses using high-resolution Facebook Connectivity Lab population data and data from a hyper-resolution flood hazard model. In recent years the field of large-scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodelled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data-poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that, for robust flood risk analysis, both hazard and exposure data should sufficiently resolve local-scale features. Global flood frameworks now enable flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset, produced by the Connectivity Lab at Facebook, provides gridded population data at 5 m resolution, a resolution increase over previous countrywide datasets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison with flood risk analyses undertaken using pre-existing population datasets.

  14. LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    ZHANG,W.; MARNERIS, I.; SANDBERG, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together under selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and the coupling inductance of upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network whose equivalent polynomial form has a degree of several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.
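
    A toy version of such a frequency-domain analysis (the element values are arbitrary, and a real main-magnet model would also include coupling and insulation paths): the input impedance of an n-section L-C ladder is computed by folding the network from the far end at each frequency.

```python
import numpy as np

def ladder_input_impedance(freq_hz, n_sections, L=1e-3, C=1e-9, R=0.01):
    """Input impedance of an n-section ladder with series R-L elements
    and shunt C to ground, folded from the far end."""
    w = 2 * np.pi * freq_hz
    z_series = R + 1j * w * L      # per-section series impedance
    y_shunt = 1j * w * C           # per-section shunt admittance
    z = z_series                   # far-end section
    for _ in range(n_sections - 1):
        z = z_series + 1.0 / (y_shunt + 1.0 / z)
    return z

for f in np.logspace(1, 6, 5):     # 10 Hz to 1 MHz
    print(f"{f:10.1f} Hz  |Z| = {abs(ladder_input_impedance(f, 200)):.3e} ohm")
```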

  15. Thermal Stress FE Analysis of Large-scale Gas Holder Under Sunshine Temperature Field

    Science.gov (United States)

    Li, Jingyu; Yang, Ranxia; Wang, Hehui

    2018-03-01

    The temperature field and thermal stress of a MAN-type gas holder are simulated using a sunshine temperature field based on the ASHRAE clear-sky model and the finite element method. The distribution of surface temperature and thermal stress of the gas holder under the given sunshine conditions is obtained. The results show that the thermal stress caused by sunshine can be identified as one of the important factors in the failure mode of local cracking and oil leakage, which occurs on the sunny side before the shady side. It is therefore of great importance to consider the sunshine thermal load in the stress analysis, design and operation of large-scale steel structures such as gas holders.
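
    For orientation, the ASHRAE clear-sky model estimates direct normal irradiance from the solar altitude using two monthly tabulated coefficients; a minimal sketch (the coefficient values shown are typical mid-year numbers, used here purely for illustration):

```python
import math

def ashrae_direct_normal(solar_altitude_deg, A=1085.0, B=0.207):
    """ASHRAE clear-sky direct normal irradiance in W/m^2. A (apparent
    extraterrestrial irradiance) and B (atmospheric extinction
    coefficient) are tabulated monthly; values here are illustrative."""
    beta = math.radians(solar_altitude_deg)
    if beta <= 0:
        return 0.0  # sun below the horizon
    return A * math.exp(-B / math.sin(beta))

# The irradiance absorbed by a shell element scales with the cosine of
# the sun-to-surface-normal angle, which is what drives the asymmetric
# (sunny-side versus shady-side) heating of the gas holder.
for altitude in (10, 30, 60, 90):
    print(altitude, round(ashrae_direct_normal(altitude), 1))
```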

  16. Status of large-scale analysis of post-translational modifications by mass spectrometry

    DEFF Research Database (Denmark)

    Olsen, Jesper V; Mann, Matthias

    2013-01-01

    Cellular function can be controlled through the gene expression program, but protein post-translational modifications (PTMs) often provide a more precise and elegant control mechanism. Key functional roles of specific modification events, for instance during the cell cycle, have been known for decades … of protein modifications. For many PTMs, including phosphorylation, ubiquitination, glycosylation and acetylation, tens of thousands of sites can now be confidently identified and localized in the sequence of the protein. Quantitation of PTM levels between different cellular states is likewise established, with label-free methods showing particular promise. It is also becoming possible to determine the absolute occupancy or stoichiometry of PTM sites on a large scale. Powerful software for the bioinformatic analysis of thousands of PTM sites has been developed. However, a complete inventory of sites has…

  17. Progress in Root Cause and Fault Propagation Analysis of Large-Scale Industrial Processes

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2012-01-01

    In large-scale industrial processes, a fault can easily propagate between process units due to the interconnection of material and information flows. Thus, fault detection and isolation for these processes is concerned first with the root cause and fault propagation, before quantitative methods are applied in local models. Process topology and causality, as the key features of the process description, need to be captured from process knowledge and process data. The modelling methods from these two aspects are overviewed in this paper. From process knowledge, structural equation modelling, various causal graphs, rule-based models, and ontological models are summarized. From process data, cross-correlation analysis, Granger causality and its extensions, frequency domain methods, information-theoretical methods, and Bayesian nets are introduced. Based on these models, inference methods are discussed to find root causes and fault propagation paths under abnormal situations. Some future work is proposed at the end.
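
    Of the data-driven methods listed, lagged cross-correlation is the simplest; the following sketch (with synthetic signals standing in for process measurements) infers a propagation direction from the lag that maximizes the correlation between two units:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic measurements: unit B follows unit A with a 15-sample delay
# plus noise, mimicking a fault propagating downstream.
n, delay = 2000, 15
a = rng.normal(size=n).cumsum()              # random-walk "fault signature"
b = np.roll(a, delay) + rng.normal(scale=0.5, size=n)

def best_lag(x, y, max_lag=50):
    """Lag (in samples) at which y best correlates with x; a positive
    value suggests that x leads y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(x[:len(x) - k], y[k:])[0, 1] if k >= 0
             else np.corrcoef(x[-k:], y[:len(y) + k])[0, 1]
             for k in lags]
    return lags[int(np.argmax(corrs))]

print(best_lag(a, b))  # expected: ~15, i.e. A leads B
```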

  18. Optimal acid digestion for multi-element analysis of different waste matrices

    DEFF Research Database (Denmark)

    Götze, Ramona; Astrup, Thomas Fruergaard

    of the distinct waste materials and recyclables. The purpose of this study is to evaluate the performance of different standardized microwave-assisted acid digestion methods on waste samples and the subsequent multi-element analysis. Six acid digestion methods were applied to a Paper & Cardboard and a Composite waste...

  19. Global analysis of seagrass restoration: the importance of large-scale planting

    KAUST Repository

    van Katwijk, Marieke M.; Thorhaug, Anitra; Marbà, Núria; Orth, Robert J.; Duarte, Carlos M.; Kendrick, Gary A.; Althuizen, Inge H. J.; Balestri, Elena; Bernard, Guillaume; Cambridge, Marion L.; Cunha, Alexandra; Durance, Cynthia; Giesen, Wim; Han, Qiuying; Hosokawa, Shinya; Kiswara, Wawan; Komatsu, Teruhisa; Lardicci, Claudio; Lee, Kun-Seop; Meinesz, Alexandre; Nakaoka, Masahiro; O'Brien, Katherine R.; Paling, Erik I.; Pickerell, Chris; Ransijn, Aryan M. A.; Verduin, Jennifer J.

    2015-01-01

    In coastal and estuarine systems, foundation species like seagrasses, mangroves, saltmarshes or corals provide important ecosystem services. Seagrasses are globally declining and their reintroduction has been shown to restore ecosystem functions. However, seagrass restoration is often challenging, given the dynamic and stressful environment that seagrasses often grow in. From our world-wide meta-analysis of seagrass restoration trials (1786 trials), we describe general features and best practice for seagrass restoration. We confirm that removal of threats is important prior to replanting. Reduced water quality (mainly eutrophication), and construction activities led to poorer restoration success than, for instance, dredging, local direct impact and natural causes. Proximity to and recovery of donor beds were positively correlated with trial performance. Planting techniques can influence restoration success. The meta-analysis shows that both trial survival and seagrass population growth rate in trials that survived are positively affected by the number of plants or seeds initially transplanted. This relationship between restoration scale and restoration success was not related to trial characteristics of the initial restoration. The majority of the seagrass restoration trials have been very small, which may explain the low overall trial survival rate (i.e. estimated 37%). Successful regrowth of the foundation seagrass species appears to require crossing a minimum threshold of reintroduced individuals. Our study provides the first global field evidence for the requirement of a critical mass for recovery, which may also hold for other foundation species showing strong positive feedback to a dynamic environment. Synthesis and applications. For effective restoration of seagrass foundation species in its typically dynamic, stressful environment, introduction of large numbers is seen to be beneficial and probably serves two purposes. First, a large-scale planting

  1. Energy System Analysis of Large-Scale Integration of Wind Power

    International Nuclear Information System (INIS)

    Lund, Henrik

    2003-11-01

    The paper presents the results of two research projects conducted by Aalborg University and financed by the Danish Energy Research Programme. Both projects include the development of models and system analyses focusing on large-scale integration of wind power into different energy systems. Market reactions and the ability to exploit exchange on the international electricity market, by locating exports in hours of high prices, are included in the analyses. This paper focuses on results which are valid for energy systems in general. The paper presents the ability of different energy systems and regulation strategies to integrate wind power. This ability is expressed by three factors: the degree of excess electricity production caused by fluctuations in wind and CHP heat demands; the ability to utilize wind power to reduce CO2 emissions in the system; and the ability to benefit from exchange of electricity on the market. Energy systems and regulation strategies are analysed over a range of wind power inputs from 0 to 100% of the electricity demand. Based on the Danish energy system, in which 50 per cent of the electricity demand is produced by CHP, a number of future energy systems with CO2 reduction potential are analysed, i.e. systems with more CHP, systems using electricity for transportation (battery or hydrogen vehicles) and systems with fuel-cell technologies. For the present and such potential future energy systems, different regulation strategies have been analysed, i.e. the inclusion of small CHP plants in the regulation task of electricity balancing and grid stability, and investments in electric heating, heat pumps and heat storage capacity. The potential of energy management has also been analysed. The results of the analyses make it possible to compare short-term and long-term potentials of different strategies for large-scale integration of wind power

  2. Performance Evaluation of Hadoop-based Large-scale Network Traffic Analysis Cluster

    Directory of Open Access Journals (Sweden)

    Tao Ran

    2016-01-01

    As Hadoop has gained popularity in the big data era, it is widely used in various fields. Our self-designed and self-developed large-scale network traffic analysis cluster works well on Hadoop, with off-line applications running on it to analyze massive network traffic data. For the purpose of scientifically and reasonably evaluating the performance of the analysis cluster, we propose a performance evaluation system. First, we take the execution times of three benchmark applications as the benchmark of performance and pick 40 metrics of customized statistical resource data. Then we identify the relationship between the resource data and the execution times by a statistical modeling approach composed of principal component analysis and multiple linear regression. After training the models on historical data, we can predict execution times from current resource data. Finally, we evaluate the performance of the analysis cluster by validating the predicted execution times. Experimental results show that the execution times predicted by the trained models are within an acceptable error range, and the evaluation results of performance are accurate and reliable.
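
    A compact sketch of the statistical modeling step described above (synthetic data stands in for the 40 cluster resource metrics): principal component analysis reduces the correlated resource metrics, and a multiple linear regression maps the components to execution time.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# Synthetic history: 200 benchmark runs x 40 resource metrics (CPU,
# memory, I/O counters, ...), with execution time driven by the metrics
# plus noise.
X_hist = rng.normal(size=(200, 40))
t_hist = X_hist @ rng.normal(size=40) + rng.normal(scale=0.5, size=200)

# PCA to decorrelate and reduce the metrics, then multiple linear
# regression from the principal components to execution time.
model = make_pipeline(PCA(n_components=10), LinearRegression())
model.fit(X_hist, t_hist)

# Predict the execution time of a new run from current resource data.
X_now = rng.normal(size=(1, 40))
print(model.predict(X_now))
```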

  3. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result, based upon only two model executions compared to fifty executions in the statistical case.

  4. Biochemical analysis of force-sensitive responses using a large-scale cell stretch device.

    Science.gov (United States)

    Renner, Derrick J; Ewald, Makena L; Kim, Timothy; Yamada, Soichiro

    2017-09-03

    Physical force has emerged as a key regulator of tissue homeostasis, and plays an important role in embryogenesis, tissue regeneration, and disease progression. Currently, the details of protein interactions under elevated physical stress are largely missing, preventing a fundamental, molecular understanding of mechano-transduction. This is in part due to the difficulty of isolating large quantities of cell lysates exposed to force-bearing conditions for biochemical analysis. We designed a simple, easy-to-fabricate, large-scale cell stretch device for the analysis of force-sensitive cell responses. Using proximity biotinylation (BioID) analysis or phospho-specific antibodies, we detected force-sensitive biochemical changes in cells exposed to prolonged cyclic substrate stretch. For example, using the promiscuous biotin ligase BirA* tagged to α-catenin, the biotinylation of myosin IIA increased with stretch, suggesting the close proximity of myosin IIA to α-catenin under a force-bearing condition. Furthermore, using phospho-specific antibodies, Akt phosphorylation was reduced upon stretch while Src phosphorylation was unchanged. Interestingly, phosphorylation of GSK3β, a downstream effector of the Akt pathway, was also reduced with stretch, while the phosphorylation of other Akt effectors was unchanged. These data suggest that the Akt-GSK3β pathway is force-sensitive. This simple cell stretch device enables biochemical analysis of force-sensitive responses and has the potential to uncover molecules underlying mechano-transduction.

  5. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The approach is applicable to low-level radioactive waste disposal system performance assessment.

  6. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    Sampling-based uncertainty analysis was carried out to quantify uncertainty in the predictions of the best estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both the steady state and transient level by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate the uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and uncertainty bands between the 5th and 95th percentiles of the output parameters were evaluated. It was observed that the uncertainty band for the primary pressure during two-phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed for the accumulator injection flow during the reflood phase. Importance analysis was also carried out, and standardized rank regression coefficients were computed to quantify the effect of each individual input parameter on the output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter for the prediction of all the primary side parameters, and that the steam generator (SG) relief pressure setting is the most important parameter for predicting the SG secondary pressure.
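    The two statistical ingredients named in this record, Latin hypercube sampling of the uncertain inputs and standardized rank regression coefficients (SRRCs) for importance ranking, can be sketched as follows. The toy response function stands in for RELAP5/MOD3.2; parameter names and ranges are assumptions for illustration.

```python
import numpy as np
from scipy.stats import qmc, rankdata

def toy_code(x):
    # Stand-in for the thermal hydraulic code: output depends strongly on
    # the break discharge coefficient, weakly on the SG relief setting.
    cd, relief = x.T
    noise = np.random.default_rng(1).normal(0, 0.1, len(x))
    return 15.0 - 6.0 * cd + 0.5 * relief + noise

sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=100)                    # 100 LHS points in [0, 1)^2
lo, hi = np.array([0.8, 7.0]), np.array([1.2, 8.5])
X = qmc.scale(unit, lo, hi)
y = toy_code(X)

# SRRC: least squares on rank-transformed, standardized inputs and output
Xr = np.column_stack([rankdata(c) for c in X.T])
yr = rankdata(y)
Xs = (Xr - Xr.mean(0)) / Xr.std(0)
ys = (yr - yr.mean()) / yr.std()
srrc, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(dict(zip(["discharge_coeff", "relief_setting"], srrc.round(2))))
```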

  7. Algorithms for large scale singular value analysis of spatially variant tomography systems

    International Nuclear Information System (INIS)

    Cao-Huu, Tuan; Brownell, G.; Lachiver, G.

    1996-01-01

    The problem of determining the eigenvalues of large matrices occurs often in the design and analysis of modern tomography systems. As there is an interest in solving systems containing an ever-increasing number of variables, current research effort is directed at creating more robust solvers which do not depend on some special feature of the matrix for convergence (e.g., block circulant), and at improving the speed of already known and understood solvers so that solving even larger systems in a reasonable time becomes viable. Standard techniques for singular value analysis are based on sparse matrix factorization and are not applicable when the input matrices are large, because the algorithms cause too much fill. Fill refers to the increase of non-zero elements in the LU decomposition of the original matrix A (the system matrix). We have therefore developed iterative solutions that avoid the fill of sparse direct methods. Data motion and preconditioning techniques are critical for performance. This conference paper describes our algorithmic approaches for large-scale singular value analysis of spatially variant imaging systems, and in particular of PCR2, a cylindrical three-dimensional PET imager built at the Massachusetts General Hospital (MGH) in Boston. We outline the desirable features of, and challenges for, the next generation of parallel machines for optimal performance of our solver.
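    A minimal sketch of the fill-free strategy described here: a Lanczos-type iterative solver touches the system matrix only through matrix-vector products, so no LU factorization (and hence no fill) ever occurs. The random sparse matrix below is an assumed stand-in for a PET system matrix.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Leading singular values of a large sparse matrix via an iterative
# Lanczos-type solver, which needs only matrix-vector products.
rng = np.random.default_rng(0)
A = sparse_random(20_000, 5_000, density=1e-3, random_state=0, format="csr")

u, s, vt = svds(A, k=10)        # 10 largest singular triplets
print(np.sort(s)[::-1])
```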

  8. Visual Iconicity Across Sign Languages: Large-Scale Automated Video Analysis of Iconic Articulators and Locations

    Science.gov (United States)

    Östling, Robert; Börstell, Carl; Courtaux, Servane

    2018-01-01

    We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions of non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger body of material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis.

  9. Analysis of a large-scale weighted network of one-to-one human communication

    International Nuclear Information System (INIS)

    Onnela, Jukka-Pekka; Saramaeki, Jari; Hyvoenen, Joerkki; Szabo, Gabor; Menezes, M Argollo de; Kaski, Kimmo; Barabasi, Albert-Laszlo; Kertesz, Janos

    2007-01-01

    We construct a connected network of 3.9 million nodes from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societal level. We assign two weights to each edge to reflect the strength of social interaction, which are the aggregate call duration and the cumulative number of calls placed between the individuals over a period of 18 weeks. We present a detailed analysis of this weighted network by examining its degree, strength, and weight distributions, as well as its topological assortativity and weighted assortativity, clustering and weighted clustering, together with correlations between these quantities. We give an account of motif intensity and coherence distributions and compare them to a randomized reference system. We also use the concept of link overlap to measure the number of common neighbours any two adjacent nodes have, which serves as a useful local measure for identifying the interconnectedness of communities. We report a positive correlation between the overlap and weight of a link, thus providing strong quantitative evidence for the weak ties hypothesis, a central concept in social network analysis. The percolation properties of the network are found to depend on the type and order of removed links, and they can help understand how the local structure of the network manifests itself at the global level. We hope that our results will contribute to modelling weighted large-scale social networks, and believe that the systematic approach followed here can be adopted to study other weighted networks.
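    The link-overlap measure mentioned here has a simple closed form: for an edge (i, j) with n_ij common neighbours and degrees k_i and k_j, the overlap is n_ij / ((k_i - 1) + (k_j - 1) - n_ij). A minimal sketch, using a small built-in graph in place of the 3.9-million-node network:

```python
import networkx as nx

def link_overlap(G, i, j):
    # overlap = n_ij / ((k_i - 1) + (k_j - 1) - n_ij)
    common = len(set(G[i]) & set(G[j]))
    denom = (G.degree(i) - 1) + (G.degree(j) - 1) - common
    return common / denom if denom > 0 else 0.0

G = nx.karate_club_graph()   # small stand-in for the call network
overlaps = {(i, j): link_overlap(G, i, j) for i, j in G.edges}
best = max(overlaps, key=overlaps.get)
print(best, round(overlaps[best], 3))
```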

  10. Measuring α in the early universe: CMB temperature, large-scale structure, and Fisher matrix analysis

    International Nuclear Information System (INIS)

    Martins, C. J. A. P.; Melchiorri, A.; Trotta, R.; Bean, R.; Rocha, G.; Avelino, P. P.; Viana, P. T. P.

    2002-01-01

    We extend our recent work on the effects of a time-varying fine-structure constant α in the cosmic microwave background by providing a thorough analysis of the degeneracies between α and the other cosmological parameters, and discussing ways to break these with both existing and forthcoming data. In particular, we present state-of-the-art cosmic microwave background constraints on α through a combined analysis of the BOOMERanG, MAXIMA and DASI data sets. We also present a novel discussion of the constraints on α coming from large-scale structure observations, focusing in particular on the power spectrum from the 2dF survey. Our results are consistent with no variation in α from the epoch of recombination to the present day, and restrict any such variation to be less than about 4%. We show that the forthcoming Microwave Anisotropy Probe and Planck experiments will be able to break most of the currently existing degeneracies between α and other parameters, and measure α to better than percent accuracy.
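    A Fisher-matrix forecast of the kind used to assess whether future experiments can break parameter degeneracies can be sketched in a few lines: F_ij = Σ_l (∂C_l/∂θ_i)(∂C_l/∂θ_j)/σ_l², with the marginalized error on α given by sqrt((F⁻¹)_αα). The toy two-parameter spectrum and error bars below are assumptions for illustration, not the actual CMB likelihood.

```python
import numpy as np

def spectrum(alpha, omega, ells):
    # Assumed stand-in for a CMB power spectrum C_l(alpha, omega)
    return 1000.0 * np.exp(-ells / (200.0 * alpha)) * (1 + 0.3 * omega)

ells = np.arange(2, 1500)
theta0 = np.array([1.0, 0.3])              # fiducial (alpha, omega)
sigma_l = 0.05 * spectrum(*theta0, ells)   # assumed 5% errors per multipole

# Numerical derivatives of C_l with respect to each parameter
derivs = []
for i in range(2):
    h = 1e-4
    tp, tm = theta0.copy(), theta0.copy()
    tp[i] += h; tm[i] -= h
    derivs.append((spectrum(*tp, ells) - spectrum(*tm, ells)) / (2 * h))

F = np.array([[np.sum(derivs[i] * derivs[j] / sigma_l**2) for j in range(2)]
              for i in range(2)])
cov = np.linalg.inv(F)                     # marginalized parameter covariance
print("forecast sigma(alpha):", np.sqrt(cov[0, 0]))
```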

  11. Large-scale Granger causality analysis on resting-state functional MRI

    Science.gov (United States)

    D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel

    2016-03-01

    We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as that accomplished by the Louvain method. We demonstrate the effectiveness of our approach by recovering the motor and visual cortex from resting-state human brain fMRI data and comparing it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice coefficient (DC). The best DC obtained was 0.59, implying strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
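    Two ingredients of this analysis admit compact sketches: a pairwise Granger-style influence measure (does adding the past of y improve the prediction of x?) and the Dice coefficient used to compare recovered networks. The sketch omits the PCA-based dimensionality reduction that distinguishes lsGC proper, and the synthetic series are assumptions.

```python
import numpy as np

def granger_influence(x, y, lag=1):
    # Log variance ratio of restricted vs. full lag-1 AR residuals
    X_r = np.column_stack([x[:-lag], np.ones(len(x) - lag)])
    X_f = np.column_stack([x[:-lag], y[:-lag], np.ones(len(x) - lag)])
    target = x[lag:]
    res_r = target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]
    res_f = target - X_f @ np.linalg.lstsq(X_f, target, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

rng = np.random.default_rng(0)
y = rng.normal(size=500)
x = np.roll(y, 1) + 0.5 * rng.normal(size=500)   # y drives x with lag 1
print("influence y->x:", round(granger_influence(x, y), 3))
print("Dice:", dice({1, 2, 3, 4}, {2, 3, 4, 5}))
```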

  12. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    Science.gov (United States)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce the needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  15. Large-scale offshore wind energy. Cost analysis and integration in the Dutch electricity market

    International Nuclear Information System (INIS)

    De Noord, M.

    1999-02-01

    The results of an analysis of the construction and integration costs of large-scale offshore wind energy (OWE) farms in 2010 are presented. The integration of these farms (1 and 5 GW) in the Dutch electricity distribution system has been considered against the background of a liberalised electricity market. A first step is taken towards determining the costs involved in solving integration problems. Three different types of foundations are examined: the mono-pile, the jacket, and a new type of foundation, the concrete caisson pile; all are single-turbine-single-support structures. For real offshore applications (>10 km offshore, at sea depths >20 m), the concrete caisson pile is regarded as the most suitable. The price/power ratios of wind turbines are analysed. It is assumed that in 2010 turbines in the power range of 3-5 MW are available. The main calculations have been conducted for a 3 MW turbine. The main choice in electrical infrastructure is between AC and DC. Calculations show that at distances of 30 km offshore and more, the use of HVDC will result in higher initial costs but lower operating costs. The share of operating and maintenance (O&M) costs in the kWh cost price is approximately 3.3%. To be able to compare the two farms, a base case is derived with a construction time of 10 years for both. The energy yield is calculated for an offshore wind regime with a 9.0 m/s annual mean wind speed. Per 3 MW turbine this results in an annual energy production of approximately 12 GWh. The total farm efficiency amounts to 82%, resulting in a total farm capacity factor of 38%. With a required internal rate of return of 15%, the kWh cost price amounts to 0.24 DFl and 0.21 DFl for the 1 GW and 5 GW farms respectively in the base case. The required internal rate of return has a large effect on the kWh cost price, followed by the costs of subsystems. O&M costs have little effect on the cost price. Parameter studies show that a small cost reduction of 5% is possible when
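    The capacity-factor and cost-price arithmetic reported in the record can be reproduced approximately as follows; the yield, farm efficiency, and required rate of return are taken from the abstract, while the capital cost and lifetime are assumptions for illustration only.

```python
# Capacity factor and annuity-based kWh cost price.
turbine_mw = 3.0
annual_gwh = 12.0                 # per-turbine yield at 9.0 m/s mean wind
farm_efficiency = 0.82

cf_turbine = annual_gwh * 1e3 / (turbine_mw * 1e3 * 8760)   # MWh / (MW*h)
cf_farm = cf_turbine * farm_efficiency
print(f"farm capacity factor: {cf_farm:.1%}")               # close to the 38% stated

r, years = 0.15, 20               # required IRR from the record; lifetime assumed
capex_dfl_per_kw = 4000.0         # assumed specific investment (DFl/kW)
annuity = r / (1 - (1 + r) ** -years)
energy_kwh = turbine_mw * 1e3 * 8760 * cf_farm
cost_per_kwh = capex_dfl_per_kw * turbine_mw * 1e3 * annuity / energy_kwh
print(f"kWh cost price: {cost_per_kwh:.2f} DFl (excl. O&M, ~3.3% extra)")
```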

  16. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and at investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, and failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
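    A minimal sketch of the simulation-driven viewpoint advocated here: rather than quoting a per-component MTTF, simulate failure and repair processes for a redundant disk group and estimate the time to data loss end to end. The model, rates, and redundancy level below are assumptions (with a deliberately short disk MTTF so the sketch runs quickly).

```python
import numpy as np

rng = np.random.default_rng(0)
n_disks, parity = 10, 2           # group tolerates up to 2 concurrent failures
mttf_disk_h = 1000.0              # assumed per-disk MTTF (exponential model)
repair_h = 24.0                   # assumed mean repair time (exponential)

def group_data_loss_time():
    # Time until more than `parity` disks are simultaneously down
    t, down = 0.0, []
    while True:
        t += rng.exponential(mttf_disk_h / n_disks)   # next failure event
        down = [d for d in down if d > t]             # drop finished repairs
        down.append(t + rng.exponential(repair_h))    # schedule this repair
        if len(down) > parity:
            return t

mttdl = np.mean([group_data_loss_time() for _ in range(2000)])
print(f"simulated MTTDL: {mttdl:.3g} h vs. per-disk MTTF {mttf_disk_h:.1e} h")
```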

  17. Large-scale phylogenomic analysis resolves a backbone phylogeny in ferns

    Science.gov (United States)

    Shen, Hui; Jin, Dongmei; Shu, Jiang-Ping; Zhou, Xi-Le; Lei, Ming; Wei, Ran; Shang, Hui; Wei, Hong-Jin; Zhang, Rui; Liu, Li; Gu, Yu-Feng; Zhang, Xian-Chun; Yan, Yue-Hong

    2018-01-01

    Background: Ferns, which originated about 360 million years ago, are the sister group of seed plants. Despite the remarkable progress in our understanding of fern phylogeny, with conflicting molecular evidence and different morphological interpretations, relationships among major fern lineages remain controversial. Results: With the aim of obtaining a robust fern phylogeny, we carried out a large-scale phylogenomic analysis using high-quality transcriptome sequencing data, which covered 69 fern species from 38 families and 11 orders. Both coalescent-based and concatenation-based methods were applied to both nucleotide and amino acid sequences in species tree estimation. The resulting topologies are largely congruent with each other, except for the placement of Angiopteris fokiensis, Cheiropleuria bicuspis, Diplaziopsis brunoniana, Matteuccia struthiopteris, Elaphoglossum mcclurei, and Tectaria subpedata. Conclusions: Our result confirmed that Equisetales is sister to the rest of ferns, and Dennstaedtiaceae is sister to eupolypods. Moreover, our result strongly supported some relationships different from the current view of fern phylogeny, including that Marattiaceae may be sister to the monophyletic clade of Psilotaceae and Ophioglossaceae; that Gleicheniaceae and Hymenophyllaceae form a monophyletic clade sister to Dipteridaceae; and that Aspleniaceae is sister to the rest of the groups in eupolypods II. These results were interpreted with morphological traits, especially sporangia characters, and a new evolutionary route of the sporangial annulus in ferns was suggested. This backbone phylogeny in ferns sets a foundation for further studies in biology and evolution in ferns, and therefore in plants. PMID:29186447

  19. Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis

    Science.gov (United States)

    Yermolaev, Yu. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Yu.

    2015-09-01

    Using the OMNI data for the period 1976-2000, we investigate the temporal profiles of 20 plasma and field parameters in the disturbed large-scale types of solar wind (SW): corotating interaction regions (CIR), interplanetary coronal mass ejections (ICME; both magnetic clouds (MC) and Ejecta), and Sheath, as well as the interplanetary shock (IS). To take into account the different durations of SW types, we use the double superposed epoch analysis (DSEA) method: rescaling the duration of the interval for all types in such a manner that, respectively, the beginnings and ends of all intervals of a selected type coincide. As the analyzed SW types can interact with each other and change parameters as a result of such interaction, we investigate separately eight sequences of SW types: (1) CIR, (2) IS/CIR, (3) Ejecta, (4) Sheath/Ejecta, (5) IS/Sheath/Ejecta, (6) MC, (7) Sheath/MC, and (8) IS/Sheath/MC. The main conclusion is that the behavior of parameters in Sheath and in CIR is very similar, both qualitatively and quantitatively. Both the high-speed stream (HSS) and the fast ICME play the role of pistons that push the plasma located ahead of them. The increase of speed in the HSS and ICME leads first to the formation of compression regions (CIR and Sheath, respectively) and then to an IS. The occurrence of compression regions and ISs increases the probability of growth of magnetospheric activity.
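    The DSEA rescaling itself is straightforward to sketch: every interval, whatever its duration, is resampled onto a common epoch grid so that all beginnings and all ends coincide, and the profiles are then averaged. The synthetic speed profiles below are assumptions standing in for the OMNI data.

```python
import numpy as np

def dsea(events, n_grid=100):
    # Rescale each event to a common [0, 1] epoch grid, then average
    grid = np.linspace(0.0, 1.0, n_grid)
    rescaled = []
    for series in events:                        # one parameter per event
        t = np.linspace(0.0, 1.0, len(series))   # normalize duration
        rescaled.append(np.interp(grid, t, series))
    return grid, np.mean(rescaled, axis=0)

rng = np.random.default_rng(0)
# Synthetic CIR-like speed ramps with random durations (in samples)
events = [np.linspace(400, 600, n) + rng.normal(0, 15, n)
          for n in rng.integers(20, 80, size=50)]
epoch, mean_profile = dsea(events)
print(mean_profile[:3].round(1), "...", mean_profile[-3:].round(1))
```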

  20. BREEDER: a microcomputer program for financial analysis of a large-scale prototype breeder reactor

    International Nuclear Information System (INIS)

    Giese, R.F.

    1984-04-01

    This report describes a microcomputer-based, single-project financial analysis program: BREEDER. BREEDER is a user-friendly model designed to facilitate frequent and rapid analyses of the financial implications associated with alternative design and financing strategies for electric generating plants and large-scale prototype breeder (LSPB) reactors in particular. The model has proved to be a useful tool in establishing cost goals for LSPB reactors. The program is available on floppy disks for use on an IBM personal computer (or IBM look-alike) running under PC-DOS or a Kaypro II transportable computer running under CP/M (and many other CP/M machines). The report documents version 1.5 of BREEDER and contains a user's guide. The report also includes a general overview of BREEDER, a summary of hardware requirements, a definition of all required program inputs, a description of all algorithms used in performing the construction-period and operation-period analyses, and a summary of all available reports. The appendixes contain a complete source-code listing, a cross-reference table, a sample interactive session, several sample runs, and additional documentation of the net-equity program option.

  1. Static analysis of large-scale multibody system using joint coordinates and spatial algebra operator.

    Science.gov (United States)

    Omar, Mohamed A

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load predictions, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium in large-scale articulated multibody systems. The proposed approach introduces an energy-drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation utilizes the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations.
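    The Baumgarte stabilization on which the proposed energy-drainage mechanism builds replaces the acceleration-level constraint Φ̈ = 0 with Φ̈ + 2αΦ̇ + β²Φ = 0, so constraint violations decay instead of drifting. A minimal sketch for a planar pendulum with constraint φ = x² + y² - L² = 0, all values assumed:

```python
import numpy as np
from scipy.integrate import solve_ivp

L, g, alpha, beta = 1.0, 9.81, 5.0, 5.0   # assumed Baumgarte gains

def rhs(t, s):
    x, y, vx, vy = s
    phi = x * x + y * y - L * L
    phid = 2 * (x * vx + y * vy)
    J = np.array([2 * x, 2 * y])           # constraint Jacobian
    F = np.array([0.0, -g])                # applied force per unit mass
    # Enforce phi_dd = -2*alpha*phid - beta^2*phi, i.e. J.a = target
    target = -2 * (vx * vx + vy * vy) - 2 * alpha * phid - beta**2 * phi
    lam = (J @ F - target) / (J @ J)       # Lagrange multiplier (m = 1)
    ax, ay = F - lam * J
    return [vx, vy, ax, ay]

sol = solve_ivp(rhs, (0, 10), [L, 0.0, 0.0, 0.0], rtol=1e-8)
x, y = sol.y[0, -1], sol.y[1, -1]
print("constraint violation at t=10:", abs(x * x + y * y - L * L))
```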

  2. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure, called the topological information content of a graph, and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
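    The partition-based idea behind the topological information content can be sketched as the Shannon entropy of a vertex partition. An exact computation partitions vertices into automorphism orbits; the sketch below uses degree classes as a cheap surrogate, which is an assumption for illustration.

```python
import math
import networkx as nx

def partition_entropy(G):
    # Shannon entropy of the degree-class partition of the vertices
    classes = {}
    for v in G:
        d = G.degree(v)
        classes[d] = classes.get(d, 0) + 1
    n = G.number_of_nodes()
    return -sum((c / n) * math.log2(c / n) for c in classes.values())

# Two small "molecular graphs": a path has two degree classes, while a
# cycle has a single class and hence zero information content.
print(partition_entropy(nx.path_graph(6)))   # > 0
print(partition_entropy(nx.cycle_graph(6)))  # 0.0
```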

  3. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
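    A heavily simplified sketch of the rejection-based mechanism: propensities are bracketed by bounds computed on a fluctuation interval around the current state, candidates are drawn from the upper bounds, and a candidate fires only if it survives a rejection test, so exact propensities are evaluated rarely and bounds are refreshed only when the state leaves its interval. The two-reaction model and rates are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
stoich = np.array([[-1, -1, +1], [+1, +1, -1]])   # A + B -> C, C -> A + B
rates = np.array([0.001, 0.1])

def propensities(x):
    return np.array([rates[0] * x[0] * x[1], rates[1] * x[2]])

x = np.array([300.0, 300.0, 0.0])
t, t_end, delta = 0.0, 10.0, 0.1                  # +/-10% fluctuation interval
lo, hi = np.floor(x * (1 - delta)), np.ceil(x * (1 + delta))

while t < t_end:
    a_hi = propensities(hi)                       # upper propensity bounds
    a0 = a_hi.sum()
    t += rng.exponential(1.0 / a0)                # thinning: time per candidate
    j = rng.choice(len(a_hi), p=a_hi / a0)        # candidate reaction
    if rng.random() <= propensities(x)[j] / a_hi[j]:   # rejection test
        x += stoich[j]
        if np.any(x < lo) or np.any(x > hi):      # refresh bounds only on exit
            lo, hi = np.floor(x * (1 - delta)), np.ceil(x * (1 + delta))

print("state at t_end:", x)
```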

  4. The PREP Pipeline: Standardized preprocessing for large-scale EEG analysis

    Directory of Open Access Journals (Sweden)

    Nima Bigdely-Shamlo

    2015-06-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode/.
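    The noisy-channel/reference interaction motivating the multi-stage robust reference can be sketched crudely: estimate a reference, flag channels that deviate wildly from it using a robust z-score, and re-estimate the reference without them. This is a simplified sketch with assumed thresholds, not the PREP implementation (which also filters and spatially interpolates channels).

```python
import numpy as np

def robust_reference(data, z_thresh=5.0, n_iter=4):
    good = np.ones(data.shape[0], dtype=bool)
    ref = data[good].mean(axis=0)
    for _ in range(n_iter):
        dev = np.abs(data - ref).std(axis=1)      # per-channel deviation
        med = np.median(dev)
        mad = np.median(np.abs(dev - med))        # robust spread estimate
        z = (dev - med) / (1.4826 * mad + 1e-12)
        good = z < z_thresh
        ref = data[good].mean(axis=0)             # reference from clean channels
    return ref, ~good

rng = np.random.default_rng(0)
data = rng.normal(0, 1, (32, 1000))
data[7] += rng.normal(0, 40, 1000)                # one very noisy channel
ref, bad = robust_reference(data)
print("flagged channels:", np.where(bad)[0])      # expect [7]
```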

  5. Dynamics of Disagreement: Large-Scale Temporal Network Analysis Reveals Negative Interactions in Online Collaboration

    Science.gov (United States)

    Tsvetkova, Milena; García-Gavilanes, Ruth; Yasseri, Taha

    2016-11-01

    Disagreement and conflict are a fact of social life. However, negative interactions are rarely explicitly declared and recorded and this makes them hard for scientists to study. In an attempt to understand the structural and temporal features of negative interactions in the community, we use complex network methods to analyze patterns in the timing and configuration of reverts of article edits to Wikipedia. We investigate how often and how fast pairs of reverts occur compared to a null model in order to control for patterns that are natural to the content production or are due to the internal rules of Wikipedia. Our results suggest that Wikipedia editors systematically revert the same person, revert back their reverter, and come to defend a reverted editor. We further relate these interactions to the status of the involved editors. Even though the individual reverts might not necessarily be negative social interactions, our analysis points to the existence of certain patterns of negative social dynamics within the community of editors. Some of these patterns have not been previously explored and carry implications for the knowledge collection practice conducted on Wikipedia. Our method can be applied to other large-scale temporal collaboration networks to identify the existence of negative social interactions and other social processes.

  7. Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2011-01-01

    The large-scale rotary machine with multiple supports, such as the rotary kiln and the rope-laying machine, is key equipment in the architectural, chemical, and agricultural industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis line deflection is a vital parameter for determining the mechanical state of a rotary machine, so body axial vibration needs to be studied for dynamic monitoring and adjustment of the machine. By using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements, namely, the rigid disk, the elastic shaft, and the linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the overall motion equation for the response. Taking a rotary kiln as an instance, natural frequencies, modal shapes, and the vibration response to a given axis line deflection excitation are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in common axis line measurement methods. The displacement response can be used for further analysis and compensation of measurement dynamical errors. The overall motion equation for the response can be applied to predict the body motion under abnormal mechanical conditions, and provides theoretical guidance for machine failure diagnosis.
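    The transfer matrix idea underlying the Riccati method can be sketched on a lumped chain: point matrices for the disk-plus-support stations, field matrices for the elastic shaft segments, multiplied along the body; natural frequencies are where the boundary-condition entry of the overall matrix changes sign. All stiffnesses and masses below are assumptions, and the sign convention is one common choice.

```python
import numpy as np

m, k_shaft, k_support = 50.0, 2e6, 5e5     # disk mass, segment and support stiffness
n_disks = 4

def overall_transfer(omega):
    # State vector (displacement x, force F); multiply point and field matrices
    T = np.eye(2)
    for _ in range(n_disks):
        point = np.array([[1.0, 0.0],
                          [k_support - m * omega**2, 1.0]])   # disk on a support
        field = np.array([[1.0, 1.0 / k_shaft],
                          [0.0, 1.0]])                        # massless shaft segment
        T = field @ point @ T
    return T

# Free-free ends carry no external force, so F = 0 at both ends and the
# natural frequencies are the roots of T[1, 0](omega).
freqs = np.linspace(1.0, 500.0, 20_000)
det = np.array([overall_transfer(2 * np.pi * f)[1, 0] for f in freqs])
roots = freqs[:-1][np.sign(det[:-1]) != np.sign(det[1:])]
print("approximate natural frequencies (Hz):", roots.round(1))
```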

  8. Multi-element analysis of emeralds and associated rocks by k0 neutron activation analysis

    International Nuclear Information System (INIS)

    Acharya, R.N.; Mondal, R.K.; Burte, P.P.; Nair, A.G.C.; Reddy, N.B.Y.; Reddy, L.K.; Reddy, A.V.R.; Manohar, S.B.

    2000-01-01

    Multi-element analysis was carried out on natural emeralds, their associated rocks, and one sample of beryl obtained from Rajasthan, India. The concentrations of 21 elements were assayed by instrumental neutron activation analysis using the k0 method (k0-INAA) and high-resolution gamma-ray spectrometry. The data reveal the segregation of some elements from the associated (trapped and host) rocks to the mineral beryl forming the gemstones. A reference rock standard of the US Geological Survey (USGS BCR-1) was also analysed as a control of the method.
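    The decay-corrected activation arithmetic underlying any INAA measurement can be sketched for the relative (comparator) variant, which avoids the tabulated k0 constants; every number below is an assumption for illustration, not a value from the study.

```python
import math

def decay_corrected_specific_activity(peak_area, mass_g, half_life_s,
                                      t_decay_s, t_count_s):
    # Normalizes a gamma peak area to unit mass and a common decay time;
    # assumes sample and standard were co-irradiated, so the saturation
    # factor cancels in the ratio.
    lam = math.log(2.0) / half_life_s
    decay = math.exp(-lam * t_decay_s)                   # cooling correction
    counting = (1.0 - math.exp(-lam * t_count_s)) / lam  # counting-window factor
    return peak_area / (mass_g * decay * counting)

# Na-24 (t_1/2 ~ 15 h) in an emerald sample vs. a comparator standard
args = dict(half_life_s=15.0 * 3600, t_count_s=1800.0)
a_sample = decay_corrected_specific_activity(52_000, 0.100, t_decay_s=6 * 3600, **args)
a_std = decay_corrected_specific_activity(91_000, 0.050, t_decay_s=5 * 3600, **args)

c_std_ppm = 1000.0                                       # assumed Na in standard
print(f"Na in sample: {c_std_ppm * a_sample / a_std:.0f} ppm")
```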

  9. Procedures for multielement analysis using high-flux fast-neutron activation

    International Nuclear Information System (INIS)

    Williams, R.E.; Hopke, P.K.; Meyer, R.A.

    1981-06-01

    Improvements have been made in the rabbit system used for multi-element fast-neutron activation analysis at the Lawrence Livermore National Laboratory Rotating Target Neutron Source, RTNS-I. Procedures have been developed for the analysis of 20 to 25 elements in samples with an inorganic matrix and 10 to 15 elements in biological samples, without the need for prohibitively expensive, long irradiations. Results are presented for the analysis of fly ash, orchard leaves, and bovine liver

  10. Analysis for Large Scale Integration of Electric Vehicles into Power Grids

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Wang, Xiaoru

    2011-01-01

    Electric Vehicles (EVs) provide a significant opportunity for reducing the consumption of fossil energies and the emission of carbon dioxide. With more and more electric vehicles integrated in the power systems, it becomes important to study the effects of EV integration on the power systems, especially the low and middle voltage level networks. In the paper, the basic structure and characteristics of the electric vehicles are introduced. The possible impacts of large scale integration of electric vehicles on the power systems, especially the advantage for the integration of the renewable energies, are discussed. Finally, the research projects related to the large scale integration of electric vehicles into the power systems are introduced, providing a reference for large scale integration of Electric Vehicles into power grids.

  11. On multielement analysis of biological samples with the aid of neutron activation

    International Nuclear Information System (INIS)

    Iyengar, G.V.

    1980-01-01

    A main objective of this study was the elucidation of problems in sampling and sample preparation methods for multielement analysis of environmental and biological specimens. Another was an assessment of the potential of multielement neutron activation analysis (NAA) in environmental and biological research. In an attempt to explain the great differences in the elemental concentration ranges between biopsy and autopsy samples reported in the literature, it was shown that post mortem changes induce great variations in the apparent elemental composition of autopsy specimens, resulting in serious systematic errors. Applications of NAA to the analysis of tissues of experimental animals, human tissues in health and disease, and environmental samples are illustrated with several examples. The suitability of NAA for routine analysis of elements such as Cr, Mo and Se, which are difficult to determine by other methods, is discussed in particular. (author)

  12. Analysis of ground response data at Lotung large-scale soil-structure interaction experiment site

    International Nuclear Information System (INIS)

    Chang, C.Y.; Mok, C.M.; Power, M.S.

    1991-12-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4-scale and 1/2-scale) of a nuclear plant containment structure at a site in Lotung (Tang, 1987), a seismically active region in northeast Taiwan. The models were constructed to gather data for the evaluation and validation of soil-structure interaction (SSI) analysis methodologies. Extensive instrumentation was deployed to record both structural and ground responses at the site during earthquakes. The experiment is generally referred to as the Lotung Large-Scale Seismic Test (LSST). As part of the LSST, two downhole arrays were installed at the site to record ground motions at depth as well as at the ground surface. Structural response and ground response have been recorded for a number of earthquakes (a total of 18 earthquakes in the period of October 1985 through November 1986) at the LSST site since the completion of the installation of the downhole instruments in October 1985. These data include those from earthquakes having magnitudes ranging from ML 4.5 to ML 7.0 and epicentral distances ranging from 4.7 km to 77.7 km. Peak ground surface accelerations range from 0.03 g to 0.21 g for the horizontal component and from 0.01 g to 0.20 g for the vertical component. The objectives of the study were: (1) to obtain empirical data on variations of earthquake ground motion with depth; (2) to examine field evidence of nonlinear soil response due to earthquake shaking and to determine the degree of soil nonlinearity; (3) to assess the ability of ground response analysis techniques, including techniques to approximate nonlinear soil response, to estimate ground motions due to earthquake shaking; and (4) to analyze earth pressures recorded beneath the basemat and on the side wall of the 1/4-scale model structure during selected earthquakes.

  13. FuncTree: Functional Analysis and Visualization for Large-Scale Omics Data.

    Directory of Open Access Journals (Sweden)

    Takeru Uchiyama

    Exponential growth of high-throughput data and the increasing complexity of omics information have been making processing and interpreting biological data an extremely difficult and daunting task. Here we developed FuncTree (http://bioviz.tokyo/functree), a web-based application for analyzing and visualizing large-scale omics data, including but not limited to genomic, metagenomic, and transcriptomic data. FuncTree allows users to map their omics data onto the "Functional Tree map", a predefined circular dendrogram, which represents the hierarchical relationship of all known biological functions defined in the KEGG database. This novel visualization method allows users to overview the broad functionality of their data, thus allowing a more accurate and comprehensive understanding of the omics information. FuncTree provides extensive customization and calculation methods to not only allow users to directly map their omics data to identify the functionality of their data, but also to compute statistically enriched functions by comparing it to other predefined omics data. We have validated FuncTree's analysis and visualization capability by mapping pan-genomic data of three different types of bacterial genera, metagenomic data of the human gut, and transcriptomic data of two different types of human cell expression. All three mappings strongly confirm FuncTree's capability to analyze and visually represent key functional features of the omics data. We believe that FuncTree's capability to conduct various functional calculations and visualize the results into a holistic overview of biological function would make it an integral analysis/visualization tool for extensive omics-based research.
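    "Statistically enriched functions" of this kind are typically computed with a one-sided hypergeometric test; whether FuncTree uses exactly this test is an assumption, and all counts below are invented for illustration.

```python
from scipy.stats import hypergeom

# Is a KEGG-style category over-represented in the user's gene set?
M, n = 20_000, 150        # background genes; genes annotated to the category
N, k = 500, 18            # user's gene set size; hits in the category

p = hypergeom.sf(k - 1, M, n, N)   # P(X >= k)
print(f"enrichment p-value: {p:.2e}")
```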

  14. PGen: large-scale genomic variations analysis workflow and browser in SoyKB.

    Science.gov (United States)

    Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti

    2016-10-06

    With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB), ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 single-nucleotide polymorphisms (SNPs) and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage. 297,245 non-synonymous SNPs and 3330 copy number variation (CNV) regions were identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects adding to 500+ soybean germplasm lines in total have been integrated. These SNPs are being utilized for trait improvement using genotype to phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers. PGen workflow has been optimized for the most

  15. Large-scale functional MRI analysis to accumulate knowledge on brain functions

    International Nuclear Information System (INIS)

    Schwartz, Yannick

    2015-01-01

    How can we accumulate knowledge on brain functions? How can we leverage years of research in functional MRI to analyse finer-grained psychological constructs and build a comprehensive model of the brain? Researchers usually rely on single studies to delineate brain regions recruited by mental processes. They relate their findings to previous works in an informal way by defining regions of interest from the literature. Meta-analysis approaches provide a more principled way to build upon the literature. This thesis investigates three ways to assemble knowledge using activation maps from a large number of studies. First, we present an approach that uses two similar fMRI experiments jointly, to better condition an analysis from a statistical standpoint. We show that it is a valuable data-driven alternative to traditional regions-of-interest analyses, but fails to provide a systematic way to relate studies, and thus does not permit integrating knowledge on a large scale. Because of the difficulty of associating multiple studies, we resort to using a single dataset sampling a large number of stimuli for our second contribution. This method estimates functional networks associated with functional profiles, where the functional networks are interacting brain regions and the functional profiles are weighted sets of cognitive descriptors. This work successfully yields known brain networks and automatically associates meaningful descriptions. Its limitations lie in the unsupervised nature of the method, which is more difficult to validate, and the use of a single dataset. It however brings the notion of cognitive labels, which is central to our last contribution. Our last contribution presents a method that learns functional atlases by combining several datasets. [Henson 2006] shows that forward inference, i.e., the probability of an activation given a cognitive process, is often not sufficient to conclude on the engagement of brain regions for a cognitive process.

  16. Multi-element trace analysis of solid samples using one-photon two-step RIMS

    International Nuclear Information System (INIS)

    Telle, H. H.; Abraham, C. J.; Jones, O. R.; Krustev, T.

    1998-01-01

    In this study we have investigated the feasibility of multi-element analysis using a simple 1+1 photo-excitation/photo-ionization scheme. Although such schemes are usually far from ideal for optimum resonance ionization, they are the approach of choice if one wishes to maintain a simple, easy-to-operate laser set-up which is potentially suitable for routine analysis. In addition, we only made use of the second-harmonic tuning range of a single dye. While this limits the range of elements which are accessible in the 1+1 RIS scheme, it further adds to the simplicity and allows for automation of sequential multi-element analysis.

  17. Large-Scale Analysis Exploring Evolution of Catalytic Machineries and Mechanisms in Enzyme Superfamilies.

    Science.gov (United States)

    Furnham, Nicholas; Dawson, Natalie L; Rahman, Syed A; Thornton, Janet M; Orengo, Christine A

    2016-01-29

    Enzymes, as biological catalysts, form the basis of all forms of life. How these proteins have evolved their functions remains a fundamental question in biology. Over 100 years of detailed biochemistry studies, combined with the large volumes of sequence and protein structural data now available, mean that we are able to perform large-scale analyses to address this question. Using a range of computational tools and resources, we have compiled information on all experimentally annotated changes in enzyme function within 379 structurally defined protein domain superfamilies, linking the changes observed in functions during evolution to changes in reaction chemistry. Many superfamilies show changes in function at some level, although one function often dominates one superfamily. We use quantitative measures of changes in reaction chemistry to reveal the various types of chemical changes occurring during evolution and to exemplify these by detailed examples. Additionally, we use structural information on the enzymes' active sites to examine how different superfamilies have changed their catalytic machinery during evolution. Some superfamilies have changed the reactions they perform without changing catalytic machinery. In others, large changes of enzyme function, in terms of both overall chemistry and substrate specificity, have been brought about by significant changes in catalytic machinery. Interestingly, in some superfamilies, relatives perform similar functions but with different catalytic machineries. This analysis highlights characteristics of functional evolution across a wide range of superfamilies, providing insights that will be useful in predicting the function of uncharacterised sequences and the design of new synthetic enzymes. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Large-scale analysis of intrinsic disorder flavors and associated functions in the protein sequence universe.

    Science.gov (United States)

    Necci, Marco; Piovesan, Damiano; Tosatto, Silvio C E

    2016-12-01

    Intrinsic disorder (ID) in proteins has been extensively described over the last decade, but a large-scale classification of ID in proteins is still mostly missing. Here, we provide an extensive analysis of ID in the protein universe on the UniProt database, derived from sequence-based predictions in MobiDB. Almost half the sequences contain an ID region of at least five residues. About 9% of proteins have a long ID region of over 20 residues; such regions are more abundant in eukaryotic organisms and most frequently cover less than 20% of the sequence. A small subset of about 67,000 (out of over 80 million) proteins is fully disordered and mostly found in Viruses. Most proteins have only one ID region, with short ID regions evenly distributed along the sequence and long ones overrepresented in the center. The charged-residue composition scheme of Das and Pappu was used to classify ID proteins by structural propensity and corresponding functional enrichment. Swollen Coils seem to be used mainly as structural components and in biosynthesis in both Prokaryotes and Eukaryotes. In Bacteria, they are confined in the nucleoid, and in Viruses they provide DNA binding function. Coils & Hairpins seem to be specialized in ribosome binding and methylation activities. Globules & Tadpoles bind antigens in Eukaryotes but are involved in killing other organisms and cytolysis in Bacteria. The Undefined class is used by Bacteria to bind toxic substances and, in Viruses, to mediate transport and movement between and within organisms. Fully disordered proteins behave similarly, but are enriched for glycine residues and extracellular structures. © 2016 The Protein Society.
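    The Das and Pappu charge classification reduces to a few lines: compute the fractions of positive and negative residues, then the fraction of charged residues (FCR) and net charge per residue (NCPR), and look up the region. The thresholds and the mapping to the class names used in this record follow the commonly cited diagram but should be read as assumptions here.

```python
POS, NEG = set("KR"), set("DE")

def das_pappu_class(seq):
    fpos = sum(aa in POS for aa in seq) / len(seq)
    fneg = sum(aa in NEG for aa in seq) / len(seq)
    fcr, ncpr = fpos + fneg, abs(fpos - fneg)
    if fcr < 0.25:
        return "weak polyampholyte (globules & tadpoles)"
    if fcr <= 0.35:
        return "boundary / Janus region (undefined)"
    if ncpr <= 0.35:
        return "strong polyampholyte (coils & hairpins)"
    return "strong polyelectrolyte (swollen coils)"

print(das_pappu_class("MKKEDDRKEDKRKEDEDRKKEDERKKDE"))   # charge-rich example
print(das_pappu_class("MGSSGQQQPLNAAGSTSGQQQPLGAAGST"))  # charge-poor example
```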

  19. Large-scale proteome comparative analysis of developing rhizomes of the ancient vascular plant Equisetum hyemale.

    Directory of Open Access Journals (Sweden)

    Tiago Santana Balbuena

    2012-06-01

    Equisetum hyemale is a widespread vascular plant species whose reproduction is mainly dependent on the growth and development of its rhizomes. Due to its key evolutionary position, the identification of factors that could be involved in the existence of the rhizomatous trait may contribute to a better understanding of the role of this underground organ for the successful propagation of this and other plant species. In the present work, we characterized the proteome of E. hyemale rhizomes using a GeLC-MS spectral-counting proteomics strategy. A total of 1,911 and 1,860 non-redundant proteins were identified in the rhizome apical tip and elongation zone, respectively. Rhizome-characteristic proteins were determined by comparing the developing rhizome tissues to developing roots. A total of 87 proteins were found to be up-regulated in both E. hyemale rhizome tissues relative to developing roots. Hierarchical clustering indicated a vast dynamic range in the expression of the 87 characteristic proteins and revealed, based on the expression profile, the existence of 9 major protein groups. Gene ontology analyses suggested an over-representation of terms involved in macromolecular and protein biosynthetic processes, gene expression, and nucleotide and protein binding functions. Analysis of spatial differences between the rhizome apical tip and the elongation zone revealed that only eight proteins were up-regulated in the apical tip, including RNA-binding proteins and an acyl carrier protein, as well as a KH-domain protein and a T-complex subunit, while only seven proteins were up-regulated in the elongation zone, including phosphomannomutase, galactomannan galactosyltransferase, endoglucanase 10 and 25, and mannose-1-phosphate guanyltransferase subunits alpha and beta. This is the first large-scale characterization of the proteome of a plant rhizome. Implications of the findings were discussed in relation to other underground organs and related

  20. Evaluation of Large-Scale Public-Sector Reforms: A Comparative Analysis

    Science.gov (United States)

    Breidahl, Karen N.; Gjelstrup, Gunnar; Hansen, Hanne Foss; Hansen, Morten Balle

    2017-01-01

    Research on the evaluation of large-scale public-sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance since the impact of such reforms is considerable and they change the context in which evaluations of other and more delimited policy areas take place. In our…

  1. Experimental and numerical analysis of water hammer in a large-scale PVC pipeline apparatus

    NARCIS (Netherlands)

    Bergant, A.; Hou, Q.; Keramat, A.; Tijsseling, A.S.; Gajic, A.; Benisek, M.; Nedeljkovic, M.

    2011-01-01

    This paper investigates the effects of pipe-wall viscoelasticity on water-hammer pressures. A large-scale pipeline apparatus made of polyvinyl chloride (PVC) at Deltares, Delft, The Netherlands, has been used to carry out water-hammer experiments. Tests have been conducted in a

  2. A conceptual analysis of standard setting in large-scale assessments

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1994-01-01

    Elements of arbitrariness in the standard setting process are explored, and an alternative to the use of cut scores is presented. The first part of the paper analyzes the use of cut scores in large-scale assessments, discussing three different functions: (1) cut scores define the qualifications used

  3. Multielement analysis of biological standards by neutron activation analysis

    International Nuclear Information System (INIS)

    Nadkarni, R.A.

    1977-01-01

    Up to 28 elements were determined in two IAEA standards: Animal Muscle H4 and Fish Soluble A 6/74, and three NBS standards: Spinach: SRM-1570, Tomato Leaves: SRM-1573 and Pine Needles: SRM-1575, by instrumental neutron-activation analysis. Seven noble metals were determined in two NBS standards: Coal: SRM-1632 and Coal Fly Ash: SRM-1633, by a radiochemical procedure, while 11 rare earth elements were determined in the NBS standard Orchard Leaves: SRM-1571 by instrumental neutron-activation analysis. The results are in good agreement with the certified and/or literature data where available. The irradiations were performed at the Cornell TRIGA Mark II nuclear reactor at a thermal neutron flux of 1-3×10¹² n·cm⁻²·sec⁻¹. The short-lived species were determined after a 2-minute irradiation in the pneumatic rabbit tube, and the longer-lived species after an 8-hour irradiation in the central thimble facility. The standards and samples were counted on a coaxial 56-cm³ Ge(Li) detector. The system resolution was 1.96 keV (FWHM), with a peak-to-Compton ratio of 37:1 and a counting efficiency of 13%, all relative to the 1.332 MeV photopeak of Co-60. (T.I.)
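    INAA concentrations are usually obtained by the relative (comparator) method: the sample's decay-corrected photopeak area is ratioed to that of a co-irradiated standard containing a known element mass. A minimal sketch of that arithmetic with invented numbers; the full correction chain (flux gradients, geometry, dead time) is more involved:

```python
# Sketch of the relative (comparator) method used in INAA: element mass
# follows from the ratio of decay-corrected photopeak areas of sample and
# co-irradiated standard. All numbers below are invented.
import math

def inaa_mass(peak_sample, peak_std, mass_std_ug,
              half_life_s, dt_sample_s, dt_std_s):
    lam = math.log(2) / half_life_s
    # correct both peak areas back to the end of irradiation
    a_sample = peak_sample * math.exp(lam * dt_sample_s)
    a_std = peak_std * math.exp(lam * dt_std_s)
    return mass_std_ug * a_sample / a_std

# e.g. Na-24 (half-life ~15 h) counted 2 h and 3 h after irradiation
print(inaa_mass(15200, 9800, 10.0, 15.0 * 3600, 2 * 3600, 3 * 3600))
```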

  4. Validation of multi-element isotope dilution ICPMS for the analysis of basalts

    Energy Technology Data Exchange (ETDEWEB)

    Willbold, M.; Jochum, K.P.; Raczek, I.; Amini, M.A.; Stoll, B.; Hofmann, A.W. [Max-Planck-Institut fuer Chemie, Mainz (Germany)

    2003-09-01

    In this study we have validated a newly developed multi-element isotope dilution (ID) ICPMS method for the simultaneous analysis of up to 12 trace elements in geological samples. By evaluating the analytical uncertainty of individual components using certified reference materials, we have quantified the overall analytical uncertainty of the multi-element ID ICPMS method at 1-2%. Individual components include sampling/weighing, purity of reagents, purity of spike solutions, calibration of spikes, determination of isotopic ratios, instrumental sources of error, correction of the mass discrimination effect, values of constants, and operator bias. We have used the ID-determined trace elements for internal standardization to indirectly improve the analysis of 14 other, mainly mono-isotopic, trace elements by external calibration. The overall analytical uncertainty for those data is about 2-3%. In addition, we have analyzed USGS and MPI-DING geological reference materials (BHVO-1, BHVO-2, KL2-G, ML3B-G) to quantify the overall bias of the measurement procedure. Trace element analysis of geological reference materials yielded results that agree mostly within about 2-3% relative to the reference values. Since these results match the conclusions obtained by the investigation of the overall analytical uncertainty, we take this as a measure of the validity of multi-element ID ICPMS. (orig.)
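    At the core of any ID method is the isotope dilution equation, which back-calculates the analyte amount from how far the measured isotope ratio of the spiked sample lies between the spike and natural ratios. A generic sketch with illustrative numbers (not the paper's data or exact formulation, which also folds in atomic weights and blank corrections):

```python
# Hedged sketch of the basic isotope-dilution relation behind ID ICPMS.
# R = ratio of reference isotope to spike isotope; numbers are invented.

def id_moles(n_spike, r_spike, r_sample, r_mix):
    """Moles of analyte from spike amount and the three isotope ratios."""
    return n_spike * (r_spike - r_mix) / (r_mix - r_sample)

# Example: natural ratio 0.1, spike ratio 10.0, measured mixture ratio 1.5
# after adding 1e-9 mol of spike.
print(id_moles(1e-9, 10.0, 0.1, 1.5))
```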

  5. Provenance Establishment of Stingless Bee Honey Using Multi-element Analysis in Combination with Chemometrics Techniques.

    Science.gov (United States)

    Shadan, Aidil Fahmi; Mahat, Naji A; Wan Ibrahim, Wan Aini; Ariffin, Zaiton; Ismail, Dzulkiflee

    2018-01-01

    As consumption of stingless bee honey has been gaining popularity in many countries, including Malaysia, the ability to identify its geographical origin accurately is pertinent to investigating fraudulent activities for consumer protection. Because a chemical signature can be location-specific, multi-element distribution patterns may prove useful for provenancing such products. Using an inductively coupled plasma optical emission spectrometer as well as principal component analysis (PCA) and linear discriminant analysis (LDA), the distributions of multi-elements in stingless bee honey collected at four different geographical locations (North, West, East, and South) in Johor, Malaysia, were investigated. While cross-validation using PCA gave a correct classification rate of 87.0%, this improved to 96.2% with LDA, indicating that the different geographical regions could be discriminated. Therefore, the use of multi-element analysis coupled with chemometrics techniques for assigning the provenance of stingless bee honeys in forensic applications is supported. © 2017 American Academy of Forensic Sciences.
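    The chemometrics step described above maps readily onto standard tooling. Below is a sketch of PCA for exploration plus cross-validated LDA for origin classification, assuming scikit-learn; the matrix shape, element count and labels are placeholders, not the study's data:

```python
# Sketch: multi-element concentrations -> PCA scores for inspection,
# cross-validated LDA for geographical classification. Placeholder data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X = np.random.rand(40, 8)                 # 40 honeys x 8 element concentrations
y = np.repeat(["N", "W", "E", "S"], 10)   # geographical origin labels

lda = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
print("LDA CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())

pca = make_pipeline(StandardScaler(), PCA(n_components=2))
scores = pca.fit_transform(X)             # 2-D scores for visual inspection
```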

  6. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Directory of Open Access Journals (Sweden)

    Zhongguang Fu

    2015-08-01

    Full Text Available As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology, and the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  7. APPLICATIONS OF CFD METHOD TO GAS MIXING ANALYSIS IN A LARGE-SCALED TANK

    International Nuclear Information System (INIS)

    Lee, S; Richard Dimenna, R

    2007-01-01

    The computational fluid dynamics (CFD) modeling technique was applied to estimate the maximum benzene concentration in the vapor space of a large-scale, high-level radioactive waste tank at the Savannah River Site (SRS). The objective of the work was to calculate the benzene mixing behavior in the vapor space of Tank 48 and its impact on the local benzene concentration. The calculations were used to evaluate the degree to which purge air mixes with benzene evolving from the liquid surface and its ability to prevent an unacceptable benzene concentration from forming. The analysis focused on changing the tank operating conditions to establish internal recirculation and on changing the benzene evolution rate from the liquid surface. The model used three-dimensional momentum equations coupled with multi-species transport. The calculations included potential operating conditions for air inlet and exhaust flows, recirculation flow rate, and benzene evolution rate with prototypic tank geometry. The flow conditions are assumed to be fully turbulent, since Reynolds numbers for typical operating conditions are in the range of 20,000 to 70,000 based on the inlet conditions of the air purge system. A standard two-equation turbulence model was used. The model was benchmarked against typical gas mixing problems available in the literature; the benchmarking results showed that the predictions are in good agreement with the analytical solutions and literature data. Additional sensitivity calculations included a reduced benzene evolution rate, reduced air inlet and exhaust flow, and forced internal recirculation. The modeling results showed that the vapor space was fairly well mixed and that benzene concentrations were relatively low when forced recirculation and 72 cfm of ventilation air through the tank boundary were imposed. For the same 72 cfm air inlet flow but without forced recirculation
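    The quoted Reynolds-number range follows from the standard pipe-flow definition Re = ρvD/μ. A quick check with assumed inlet dimensions and velocities (not values from the report) reproduces the 20,000-70,000 order of magnitude:

```python
# Re = rho * v * D / mu for the purge-air inlet; diameter, velocities and
# air properties below are assumed values, chosen only to show the arithmetic.
rho, mu = 1.2, 1.8e-5           # air density (kg/m^3) and viscosity (Pa*s)
D = 0.10                        # assumed inlet pipe diameter (m)
for v in (3.0, 10.0):           # assumed inlet velocities (m/s)
    print(f"v = {v} m/s -> Re = {rho * v * D / mu:.0f}")
```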

  8. Fault Transient Analysis and Protection Performance Evaluation within a Large-scale PV Power Plant

    Directory of Open Access Journals (Sweden)

    Wen Jinghua

    2016-01-01

    Full Text Available In this paper, a short-circuit test within a large-scale PV power plant with a total capacity of 850 MWp is discussed. The fault currents supplied by the PV generation units are presented and analysed. Based on the fault behaviour, the existing protection coordination principles within the plant are considered and their performance is evaluated. Moreover, these protections are examined on a simulation platform under different operating situations. A simple communication-based measure is proposed to deal with a foreseeable problem in the current protection scheme of the PV power plant.

  9. Large-Scale Parallel Finite Element Analysis of the Stress Singular Problems

    International Nuclear Information System (INIS)

    Noriyuki Kushida; Hiroshi Okuda; Genki Yagawa

    2002-01-01

    In this paper, the convergence behavior of the large-scale parallel finite element method for stress singular problems was investigated. The convergence behavior of iterative solvers depends on the efficiency of the preconditioners; however, the efficiency of preconditioners may be influenced by the domain decomposition that is necessary for parallel FEM. In this study the following results were obtained: the conjugate gradient method without preconditioning and the diagonal-scaling preconditioned conjugate gradient method were not influenced by the domain decomposition, as expected; the symmetric successive over-relaxation preconditioned conjugate gradient method converged up to 6% faster when the stress singular area was contained in one sub-domain. (authors)
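    Diagonal scaling (Jacobi) preconditioning, one of the solvers compared above, is straightforward to sketch. A dense-matrix toy version of preconditioned conjugate gradient follows; a real parallel FEM code would of course use distributed sparse storage and a domain-decomposed preconditioner:

```python
# Illustrative diagonal-scaling (Jacobi) preconditioned conjugate gradient.
# Dense numpy for brevity; not a parallel FEM implementation.
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    m_inv = 1.0 / np.diag(A)          # Jacobi preconditioner M^-1
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD test matrix
print(jacobi_pcg(A, np.array([1.0, 2.0])))
```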

  10. An analysis of Australia's large scale renewable energy target: Restoring market confidence

    International Nuclear Information System (INIS)

    Nelson, Tim; Nelson, James; Ariyaratnam, Jude; Camroux, Simon

    2013-01-01

    In 2001, Australia introduced legislation requiring investment in new renewable electricity generating capacity. The legislation was significantly expanded in 2009 to give effect to a 20% Renewable Energy Target (RET). Importantly, the policy was introduced with bipartisan support and is consistent with global policy trends. In this article, we examine the history of the policy and establish that the ‘stop/start’ nature of renewable policy development has resulted in investors withholding new capital until greater certainty is provided. We utilise the methodology from Simshauser and Nelson (2012) to examine whether capital market efficiency losses would occur under certain policy scenarios. The results show that electricity costs would increase by between $51 million and $119 million if the large-scale RET is abandoned, even after accounting for avoided renewable costs. Our conclusions are clear: we find that policymakers should be guided by a high-level public policy principle in relation to large-scale renewable energy policy: constant review is not reform. -- Highlights: •We examine the history of Australian renewable energy policy. •We examine whether capital market efficiency losses occur under certain policy scenarios. •We find electricity costs increase by up to $119 million due to renewable policy uncertainty. •We conclude that constant review of policy is not reform and should be avoided

  11. Diffusion Experiments in Opalinus Clay: Laboratory, Large-Scale Diffusion Experiments and Microscale Analysis by RBS.

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Gutierrez, M.; Alonso de los Rios, U.; Missana, T.; Cormenzana, J.L.; Mingarro, M.; Morejon, J.; Gil, P.

    2008-08-06

    The Opalinus Clay (OPA) formation in the Zürcher Weinland (Switzerland) is a potential host rock for a repository for high-level radioactive waste. Samples collected in the Mont Terri Underground Rock Laboratory (URL), where the OPA formation is located at a depth between -200 and -300 m below the surface, were used to study radionuclide diffusion in clay materials. Classical laboratory assays and a novel experimental set-up for large-scale diffusion experiments were performed, together with a novel application of the nuclear ion beam technique Rutherford Backscattering Spectrometry (RBS), to understand the transport properties of the OPA and to enhance the methodologies used for in situ diffusion experiments. Conventional through-diffusion and in-diffusion laboratory experiments were carried out with HTO, ³⁶Cl⁻, I⁻, ²²Na, ⁷⁵Se, ⁸⁵Sr, ²³³U, ¹³⁷Cs, ⁶⁰Co and ¹⁵²Eu. Large-scale diffusion experiments were performed with HTO, ³⁶Cl and ⁸⁵Sr, and new experiments with ⁶⁰Co, ¹³⁷Cs and ¹⁵²Eu are ongoing. Diffusion experiments with the RBS technique were done with Sr, Re, U and Eu. (Author) 38 refs.

  12. Analysis on the Critical Rainfall Value For Predicting Large Scale Landslides Caused by Heavy Rainfall In Taiwan.

    Science.gov (United States)

    Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey

    2017-04-01

    During Typhoon Morakot in August 2009, an accumulated rainfall of more than 2,900 mm was recorded within three consecutive days. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. The satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized based on their disaster type, scale, topography, major bedrock formations and geologic structures during the period of extremely heavy rainfall events in southern Taiwan. Characteristics and mechanisms of large-scale landslides were collected on the basis of field investigation integrated with GPS/GIS/RS techniques. In order to decrease the risk of large-scale landslides on slope land, a strategy for slope-land conservation and a critical rainfall database should be set up and put into operation as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and all people living in Taiwan. The mechanism of large-scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under the condition of extreme climate change during the past 10 years would be seriously concerned and recognized as a required issue by this

  13. Simultaneous multi-element analysis of some edible pulses using neutron activation analysis

    International Nuclear Information System (INIS)

    El-Sweify, F.H.; Metwally, E.; Abdel-Khalik, H.

    2007-01-01

    This paper describes the application of instrumental neutron activation analysis (INAA) for multi-element determination in edible pulse samples in daily use in the Egyptian kitchen: anise, cumin, coriander, caraway, black cumin, white kidney bean, lupine, lentil, chickpea, broad bean, peanut, almond, and fenugreek. The pulses were analyzed as dehulled seeds and, in the case of legume and oil pulses, their respective skins were analyzed simultaneously. The determined elements were Ce, Co, Cr, Cs, Eu, Fe, Hf, Rb, Sb, Sc, Sr, Th and Zn. The element contents of the dehulled pulses and their respective skins were compared; some elements occur as major or minor constituents while others are trace elements. Standard reference materials were used to assure quality control, accuracy and precision of the technique. (author)

  14. Performance analysis on a large scale borehole ground source heat pump in Tianjin cultural centre

    Science.gov (United States)

    Yin, Baoquan; Wu, Xiaoting

    2018-02-01

    In this paper, the temperature distribution of the geothermal field for a vertical borehole ground-coupled heat pump was tested and analysed. Besides the borehole ground-coupled heat pump, the system comprised ice storage, a heat supply network and a cooling tower. According to nearly three years of operation data, the constant-temperature zone lies at a ground depth of 40-120 m, with a temperature gradient of about 3.0°C/100 m. The soil temperature dropped significantly in the heating season, increased significantly in the cooling season, and recovered in the transitional season. With the balanced design of heating and cooling loads and the soil's thermal inertia, the soil temperature stayed within a relatively stable range and the ground source heat pump system operated with relatively high efficiency. The ground source heat pump was shown to be applicable for large-scale utilization.

  15. The causality analysis of climate change and large-scale human crisis.

    Science.gov (United States)

    Zhang, David D; Lee, Harry F; Wang, Cong; Li, Baosheng; Pei, Qing; Zhang, Jane; An, Yulun

    2011-10-18

    Recent studies have shown strong temporal correlations between past climate changes and societal crises. However, the specific causal mechanisms underlying this relation have not been addressed. We explored quantitative responses of 14 fine-grained agro-ecological, socioeconomic, and demographic variables to climate fluctuations from A.D. 1500-1800 in Europe. Results show that cooling from A.D. 1560-1660 caused successive agro-ecological, socioeconomic, and demographic catastrophes, leading to the General Crisis of the Seventeenth Century. We identified a set of causal linkages between climate change and human crisis. Using temperature data and climate-driven economic variables, we simulated the alternation of defined "golden" and "dark" ages in Europe and the Northern Hemisphere during the past millennium. Our findings indicate that climate change was the ultimate cause, and climate-driven economic downturn was the direct cause, of large-scale human crises in preindustrial Europe and the Northern Hemisphere.

  16. The analysis of MAI in large scale MIMO-CDMA system

    Science.gov (United States)

    Berceanu, Madalina-Georgiana; Voicu, Carmen; Halunga, Simona

    2016-12-01

    Recently, technological development has imposed rapid growth in the use of data carried by cellular services, which also implies the necessity of higher data rates and lower latency. To meet users' demands, a series of new data processing techniques has been brought into discussion. In this paper, we consider MIMO technology, which uses multiple antennas at the receiver and transmitter ends. To study the performance obtained by this technology, we propose a MIMO-CDMA system in which image transmission is used instead of random data transmission, to take advantage of a larger range of quality indicators. In the simulations we increased the number of antennas and observed how the performance of the system changed; on that basis, we compared a conventional MIMO and a Large Scale MIMO system in terms of BER and the MSSIM index, a metric that compares the quality of the image before transmission with that of the received one.
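    Of the two quality indicators used above, BER is simple bit arithmetic (errored bits over total bits); MSSIM additionally averages the structural similarity index over image windows and is not reproduced here. A toy BER computation over a simulated 1% bit-flip channel:

```python
# Toy illustration of the bit error rate (BER) indicator: the fraction of
# transmitted bits flipped by the channel. The channel model is invented.
import numpy as np

def ber(tx_bits, rx_bits):
    return np.mean(tx_bits != rx_bits)   # fraction of flipped bits

rng = np.random.default_rng(0)
tx = rng.integers(0, 2, 10_000)
rx = tx.copy()
flip = rng.random(tx.size) < 1e-2        # simulate a 1% bit-error channel
rx[flip] ^= 1
print("BER =", ber(tx, rx))
```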

  17. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We incorporate disaster tolerance within a system by simulating various threats to the system's operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to their large computational complexity requirements. To address this limitation, an axiomatic approach is developed that decomposes a large-scale system into smaller subsystems that can be independently modeled. This approach is implemented using a data communications network system as an example. The results indicate that the decomposition approach produces simulation responses that are similar to those of the full-system approach, but with greatly reduced simulation time.

  18. Survey and analysis of selected jointly owned large-scale electric utility storage projects

    Energy Technology Data Exchange (ETDEWEB)

    1982-05-01

    The objective of this study was to examine and document the issues surrounding the curtailment in commercialization of large-scale electric storage projects. It was sensed that if these issues could be uncovered, then efforts might be directed toward clearing away these barriers and allowing these technologies to penetrate the market to their maximum potential. Joint ownership of these projects was seen as a possible solution to overcoming the major barriers, particularly economic barriers, of commercialization. Therefore, discussions with partners involved in four pumped storage projects took place to identify the difficulties and advantages of joint-ownership agreements. The four plants surveyed included Yards Creek (Public Service Electric and Gas and Jersey Central Power and Light); Seneca (Pennsylvania Electric and Cleveland Electric Illuminating Company); Ludington (Consumers Power and Detroit Edison); and Bath County (Virginia Electric Power Company and Allegheny Power System, Inc.). Also investigated were several pumped storage projects which were never completed. These included Blue Ridge (American Electric Power); Cornwall (Consolidated Edison); Davis (Allegheny Power System, Inc.); and Kittatinny Mountain (General Public Utilities). Institutional, regulatory, technical, environmental, economic, and special issues at each project were investigated, and the conclusions relative to each issue are presented. The major barriers preventing the growth of energy storage are the high cost of these systems in times of extremely high cost of capital, diminishing load growth, and regulatory influences which will not allow the building of large-scale storage systems due to environmental objections or other reasons. However, the future for energy storage looks viable despite difficult economic times for the utility industry. Joint ownership can ease some of the economic hardships for utilities which demonstrate a need for energy storage.

  19. Multi-element neutron activation analysis and solution of classification problems using multidimensional statistics

    International Nuclear Information System (INIS)

    Vaganov, P.A.; Kol'tsov, A.A.; Kulikov, V.D.; Mejer, V.A.

    1983-01-01

    Multi-element instrumental neutron activation analysis was performed on rock samples (sandstones, aleurolites and shales) from a gold deposit. The spectra of the irradiated samples were measured with a Ge(Li) detector with a volume of 35 mm³. The contents of 22 chemical elements were determined in each sample. The results of the analysis serve as a reliable basis for multidimensional statistical processing; they constitute the basis for generalized characteristics of the rocks, which leads to the solution of the classification problem for rocks from different deposits

  20. Large-scale deployment of electric taxis in Beijing: A real-world analysis

    International Nuclear Information System (INIS)

    Zou, Yuan; Wei, Shouyang; Sun, Fengchun; Hu, Xiaosong; Shiao, Yaojung

    2016-01-01

    The national and municipal governments of China enacted a series of regulations and policies to stimulate and promote the development of new energy vehicles, in order to mitigate increasingly serious carbon emissions, environmental pollution, and energy shortages. As a large, populated metropolitan city subject to notorious air pollution, Beijing has been making remarkable progress in the large-scale demonstration of new energy vehicles in recent years, which could result in a significant impact on both the transport and electricity sectors. As a result, there is an urgent need to study the characteristics of large-scale new energy vehicle adoption for a deep understanding of operational status (e.g., energy consumption and battery charging patterns) and benefits, as well as charging facilities. Based on operational data collected from a realistic electric-taxi demonstration in Beijing, driver behavior and charging characteristics are examined in this paper. The energy consumption and efficiency of two representative electric-taxi platforms are compared, and the influence of the driving schedules is discussed. The results show that the average driving distance per day of these electric taxis is 117.98 km, and that 92% of drivers recharge their cars twice per day. Further study shows that the drivers make two trips per day, and the two peaks in the distribution of departure and arrival times coincide with the rush hours in the morning and evening. The taxi recharge duration is largely influenced by the charging power. Generally, the associated battery SOC (state of charge) swing is between 40% and 100%. By evaluating the energy consumption of 282 trips recorded in 2013 and 2014, we find that the two platforms have similar energy efficiency. The micro-trips method is utilized to probe the correlation between energy consumption and average speed. - Highlights: • Electric taxis' driver behavior and charging characteristics are analyzed based on operation data

  1. ANALYSIS OF RADAR AND OPTICAL SPACE BORNE DATA FOR LARGE SCALE TOPOGRAPHICAL MAPPING

    Directory of Open Access Journals (Sweden)

    W. Tampubolon

    2015-03-01

    Full Text Available Normally, in order to provide high-resolution 3-dimensional (3D) geospatial data, large-scale topographical mapping needs input from conventional airborne campaigns, which in Indonesia are bureaucratically complicated, especially during legal administration procedures, i.e., security clearance from the military/defense ministry. This often causes additional time delays besides technical constraints such as weather and limited aircraft availability for airborne campaigns. Of course, geospatial data quality is an important issue for many applications. The increasing demand for geospatial data nowadays consequently requires high-resolution datasets as well as a sufficient level of accuracy. Therefore an integration of different technologies is required in many cases to achieve the expected result, especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data, which is expressed by the term “Rapid Mapping”. In this paper we present first results of on-going research to integrate different data sources like space-borne radar and optical platforms. Initially, the orthorectification of Very High Resolution Satellite (VHRS) imagery, i.e. SPOT-6, has been done as a continuous process to the DEM generation using TerraSAR-X/TanDEM-X data. The role of Ground Control Points (GCPs) from GNSS surveys is mandatory in order to fulfil geometrical accuracy. In addition, this research aims at providing a suitable processing algorithm of space-borne data for large-scale topographical mapping as described in section 3.2. Recently, radar space-borne data have been used for medium-scale topographical mapping, e.g. for the 1:50,000 map scale in Indonesian territories. The goal of this on-going research is to increase the accuracy of remote sensing data by different activities, e.g. the integration of different data sources (optical and radar) or the usage of the GCPs in both the optical and the

  2. A New Perspective on Polyploid Fragaria (Strawberry) Genome Composition Based on Large-Scale, Multi-Locus Phylogenetic Analysis

    OpenAIRE

    Yang, Yilong; Davis, Thomas M

    2017-01-01

    Abstract The subgenomic compositions of the octoploid (2n = 8× = 56) strawberry (Fragaria) species, including the economically important cultivated species Fragaria x ananassa, have been a topic of long-standing interest. Phylogenomic approaches utilizing next-generation sequencing technologies offer a new window into species relationships and the subgenomic compositions of polyploids. We have conducted a large-scale phylogenetic analysis of Fragaria (strawberry) species using the Fluidigm Ac...

  3. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also the communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  4. Three-Dimensional Thermo Fluid Analysis of Large Scale Electric Motor

    Directory of Open Access Journals (Sweden)

    Debasish Biswas

    2000-01-01

    Full Text Available In the present work, the flow and temperature fields in a large-scale rotating electric motor are studied by solving the Navier–Stokes equations together with the temperature equation on the basis of the finite difference method. All equations are written in terms of the relative velocity with respect to the rotating frame of reference. A generalized coordinate system is used so that sufficient grid resolution can be achieved in the body-surface boundary layer region. Differential terms with respect to time are approximated by forward differences, diffusion terms by the implicit Euler form, and convection terms in the Navier–Stokes equations by a third-order upwind difference scheme. The results of the calculation led to a good understanding of the flow behavior, namely, the rotating cavity flow between the supporting bars of the motor, the flow stagnation, and the region of temperature rise due to flow stagnation. The measured average temperature of the motor coil wall is also predicted quite satisfactorily.
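    For the convection term, a third-order upwind-biased difference (for positive advection speed) uses the stencil (2u_{i+1} + 3u_i - 6u_{i-1} + u_{i-2})/(6Δx). A one-dimensional toy sketch of an explicit step with this stencil on a periodic grid follows; the motor geometry, implicit diffusion treatment and generalized coordinates of the paper are omitted:

```python
# One explicit step of 1-D linear advection u_t + a*u_x = 0 using a
# third-order upwind-biased difference (a > 0). Grid and CFL are invented.
import numpy as np

def step_third_order_upwind(u, a, dx, dt):
    up1 = np.roll(u, -1)   # u_{i+1}
    um1 = np.roll(u, 1)    # u_{i-1}
    um2 = np.roll(u, 2)    # u_{i-2}
    dudx = (2 * up1 + 3 * u - 6 * um1 + um2) / (6 * dx)
    return u - a * dt * dudx

x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)        # Gaussian pulse, periodic domain
for _ in range(100):                      # CFL = a*dt/dx = 0.4
    u = step_third_order_upwind(u, a=1.0, dx=x[1] - x[0], dt=0.002)
```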

  5. The topology of large-scale structure. III. Analysis of observations

    International Nuclear Information System (INIS)

    Gott, J.R. III; Weinberg, D.H.; Miller, J.; Thuan, T.X.; Schneider, S.E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a meatball topology. 66 refs

  6. The topology of large-scale structure. III - Analysis of observations

    Science.gov (United States)

    Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Haynes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.

    1989-05-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  7. The topology of large-scale structure. III - Analysis of observations. [in universe

    Science.gov (United States)

    Gott, J. Richard, III; Weinberg, David H.; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.
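    The genus statistic behind these analyses counts, roughly, holes minus isolated regions of iso-density surfaces as the density threshold is varied. A loose two-dimensional analogue, assuming scipy: smooth a random field, threshold it at a few levels, and difference connected-region and hole counts (the published 3-D estimator is considerably more careful about boundaries):

```python
# Rough 2-D analogue of a genus-curve measurement: smooth a random density
# field, threshold it, and count connected regions minus holes. Qualitative
# sketch only; the hole count includes the outer background component.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
field = ndimage.gaussian_filter(rng.standard_normal((256, 256)), sigma=8)

for nu in (-1.0, 0.0, 1.0):                     # threshold in units of sigma
    excursion = field > nu * field.std()
    _, n_regions = ndimage.label(excursion)
    _, n_holes = ndimage.label(~excursion)
    print(f"nu = {nu:+.1f}: regions - holes = {n_regions - n_holes}")
```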

  8. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Directory of Open Access Journals (Sweden)

    Sundeep Teki

    Full Text Available The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance; the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus, that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  9. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance; the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus, that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  10. Calculation and characteristics analysis of blade pitch loads for large scale wind turbines

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Based on the electric pitch systems of large-scale horizontal-axis wind turbines, the blade pitch loads, which come mainly from centrifugal force, aerodynamic force and gravity, are analyzed, and calculation models for them are established in this paper. For illustration, a 1.2 MW wind turbine is introduced as a practical example, and its blade pitch loads from centrifugal force, aerodynamic force and gravity are calculated and analyzed separately and together. The results show that as the rotor rotates through 360°, the fluctuation of the blade pitch loads approximately follows a cosine curve when the rotor speed, in-flow wind speed and pitch angle are constant. Furthermore, the amplitude of the blade pitch load differs considerably at different pitch angles. The calculation methods for blade pitch loads are universal and are helpful for further research on individual pitch control systems.
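    The cosine-like fluctuation has a simple physical reading: at constant rotor speed, the gravity contribution to the pitching (torsional) load varies with azimuth roughly as M(θ) = m·g·e·cos(θ), where e is the chordwise offset of the blade's center of mass from the pitch axis. A toy evaluation with assumed values (not the 1.2 MW turbine's parameters):

```python
# Toy model of the cosine-like gravity component of blade pitch load.
# Mass, offset and the model itself are illustrative assumptions.
import math

m, g, e = 6000.0, 9.81, 0.15        # blade mass (kg), gravity, offset (m)
for deg in range(0, 361, 90):
    theta = math.radians(deg)
    moment = m * g * e * math.cos(theta)
    print(f"azimuth {deg:3d} deg -> gravity pitch moment {moment:9.1f} N*m")
```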

  11. A large-scale analysis of sex differences in facial expressions.

    Directory of Open Access Journals (Sweden)

    Daniel McDuff

    Full Text Available There exists a stereotype that women are more expressive than men; however, research has almost exclusively focused on a single facial behavior, smiling. A large-scale study examines whether women are consistently more expressive than men or whether the effects are dependent on the emotion expressed. Studies of gender differences in expressivity have been somewhat restricted to data collected in lab settings or which required labor-intensive manual coding. In the present study, we analyze gender differences in facial behaviors as over 2,000 viewers watch a set of video advertisements in their home environments. The facial responses were recorded using participants' own webcams. Using a new automated facial coding technology we coded facial activity. We find that women are not universally more expressive across all facial actions. Nor are they more expressive in all positive valence actions and less expressive in all negative valence actions. It appears that generally women express actions more frequently than men, and in particular express more positive valence actions. However, expressiveness is not greater in women for all negative valence actions and is dependent on the discrete emotional state.

  12. Methodology for Design and Analysis of Reactive Distillation Involving Multielement Systems

    DEFF Research Database (Denmark)

    Jantharasuk, Amnart; Gani, Rafiqul; Górak, Andrzej

    2011-01-01

    A new methodology for design and analysis of reactive distillation has been developed. In this work, the element-based approach, coupled with a driving force diagram, has been extended and applied to the design of a reactive distillation column involving multielement (multicomponent) systems ... consisting of two components. Based on this methodology, an optimal design configuration is identified using the equivalent binary-element driving force diagram. Two case studies, of methyl acetate (MeOAc) synthesis and methyl tert-butyl ether (MTBE) synthesis, have been considered to demonstrate ... the successful applications of the methodology. Moreover, energy requirements for various column configurations corresponding to different feed locatio...

  13. Analysis of an HTS coil for large scale superconducting magnetic energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ji Young; Lee, Se Yeon; Choi, Kyeong Dal; Park, Sang Ho; Hong, Gye Won; Kim, Sung Soo; Kim, Woo Seok [Korea Polytechnic University, Siheung (Korea, Republic of); Lee, Ji Kwang [Woosuk University, Wanju (Korea, Republic of)

    2015-06-15

    It has been well known that a toroid is the inevitable shape for a high temperature superconducting (HTS) coil as a component of a large-scale superconducting magnetic energy storage system (SMES), because it is the best option to minimize the magnetic field intensity applied perpendicularly to the HTS wires. Even though a perfect toroid coil does not have a perpendicular magnetic field, for a practical toroid coil composed of many HTS pancake coils some perpendicular magnetic field cannot be avoided, which is a major cause of degradation of the HTS wires. In order to suggest an optimum design solution for an HTS SMES system, we need an accurate, fast, and effective calculation of the magnetic field, mechanical stresses, and stored energy. As a calculation method for these criteria, a numerical calculation such as the finite element method (FEM) has usually been adopted. However, a 3-dimensional FEM can involve complicated calculation and can be relatively time consuming, which leads to very inefficient iterations in an optimal design process. In this paper, we suggest an intuitive and effective way to determine the maximum magnetic field intensity in the HTS coil by using an analytic and statistical calculation method, which achieves a remarkable reduction of the calculation time. The calculation results using this method for sample model coils were compared with those obtained by the conventional numerical method to verify the accuracy and availability of the proposed method. After the successful substitution of this calculation method into the proposed design program, a similar method of determining the maximum mechanical stress in the HTS coil will also be studied as future work.
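    The toroid argument can be checked with the ideal-toroid field formula, where the azimuthal field peaks at the inner major radius: B_max = μ₀NI/(2πr_inner). A back-of-the-envelope sketch with assumed turn count, current and radius (none taken from the paper):

```python
# Peak field of an ideal toroid, B_max = mu0 * N * I / (2 * pi * r_inner).
# N, I and r_inner are assumed values, chosen only for scale.
import math

mu0 = 4e-7 * math.pi
N, I = 5000, 200.0        # assumed total turns and operating current (A)
r_inner = 1.5             # assumed inner major radius (m)
print(f"B_max ~ {mu0 * N * I / (2 * math.pi * r_inner):.2f} T")
```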

  14. TIMPs of parasitic helminths - a large-scale analysis of high-throughput sequence datasets.

    Science.gov (United States)

    Cantacessi, Cinzia; Hofmann, Andreas; Pickering, Darren; Navarro, Severine; Mitreva, Makedonka; Loukas, Alex

    2013-05-30

    Tissue inhibitors of metalloproteases (TIMPs) are a multifunctional family of proteins that orchestrate extracellular matrix turnover, tissue remodelling and other cellular processes. In parasitic helminths, such as hookworms, TIMPs have been proposed to play key roles in the host-parasite interplay, including invasion of and establishment in the vertebrate animal hosts. Currently, knowledge of helminth TIMPs is limited to a small number of studies on canine hookworms, whereas no information is available on the occurrence of TIMPs in other parasitic helminths causing neglected diseases. In the present study, we conducted a large-scale investigation of TIMP proteins of a range of neglected human parasites including the hookworm Necator americanus, the roundworm Ascaris suum, the liver flukes Clonorchis sinensis and Opisthorchis viverrini, as well as the schistosome blood flukes. This entailed mining available transcriptomic and/or genomic sequence datasets for the presence of homologues of known TIMPs, predicting secondary structures of defined protein sequences, systematic phylogenetic analyses and assessment of differential expression of genes encoding putative TIMPs in the developmental stages of A. suum, N. americanus and Schistosoma haematobium which infect the mammalian hosts. A total of 15 protein sequences with high homology to known eukaryotic TIMPs were predicted from the complement of sequence data available for parasitic helminths and subjected to in-depth bioinformatic analyses. Supported by the availability of gene manipulation technologies such as RNA interference and/or transgenesis, this work provides a basis for future functional explorations of helminth TIMPs and, in particular, of their role/s in fundamental biological pathways linked to long-term establishment in the vertebrate hosts, with a view towards the development of novel approaches for the control of neglected helminthiases.

  15. Energy Analysis of Cascade Heating with High Back-Pressure Large-Scale Steam Turbine

    Directory of Open Access Journals (Sweden)

    Zhihua Ge

    2018-01-01

    Full Text Available To reduce the exergy loss caused by the high-grade extraction steam of the traditional heating mode of combined heat and power (CHP) generating units, a high back-pressure cascade heating technology for two jointly constructed large-scale steam turbine power generating units is proposed. Unit 1 makes full use of the exhaust steam heat from the high back-pressure turbine, and Unit 2 uses the original heating mode of extraction steam condensation, which significantly reduces the flow rate of high-grade extraction steam. Typical 2 × 350 MW supercritical CHP units in northern China were selected as the study object. The boundary conditions for heating were determined based on the actual climatic conditions and heating demands. A model to analyze the performance of the high back-pressure cascade heating units under off-design operating conditions was developed. The load distributions between high back-pressure exhaust steam direct supply and extraction steam heating supply were described under various conditions, based on which the heating efficiency of the CHP units with the high back-pressure cascade heating system was analyzed. The design heating load and maximum heating supply load were determined as well. The results indicate that the average coal consumption rate during the heating season is 205.46 g/kWh for the design heating load after the retrofit, which is about 51.99 g/kWh lower than that of the traditional heating mode. A coal consumption rate of 199.07 g/kWh can be achieved for the maximum heating load. Significant energy saving and CO2 emission reduction are obtained.

  16. Analysis of an HTS coil for large scale superconducting magnetic energy storage

    International Nuclear Information System (INIS)

    Lee, Ji Young; Lee, Se Yeon; Choi, Kyeong Dal; Park, Sang Ho; Hong, Gye Won; Kim, Sung Soo; Kim, Woo Seok; Lee, Ji Kwang

    2015-01-01

    It has been well known that a toroid is the inevitable shape for a high temperature superconducting (HTS) coil as a component of a large-scale superconducting magnetic energy storage system (SMES), because it is the best option to minimize the magnetic field intensity applied perpendicularly to the HTS wires. Even though a perfect toroid coil does not have a perpendicular magnetic field, for a practical toroid coil composed of many HTS pancake coils some perpendicular magnetic field cannot be avoided, which is a major cause of degradation of the HTS wires. In order to suggest an optimum design solution for an HTS SMES system, we need an accurate, fast, and effective calculation of the magnetic field, mechanical stresses, and stored energy. As a calculation method for these criteria, a numerical calculation such as the finite element method (FEM) has usually been adopted. However, a 3-dimensional FEM can involve complicated calculation and can be relatively time consuming, which leads to very inefficient iterations in an optimal design process. In this paper, we suggest an intuitive and effective way to determine the maximum magnetic field intensity in the HTS coil by using an analytic and statistical calculation method, which achieves a remarkable reduction of the calculation time. The calculation results using this method for sample model coils were compared with those obtained by the conventional numerical method to verify the accuracy and availability of the proposed method. After the successful substitution of this calculation method into the proposed design program, a similar method of determining the maximum mechanical stress in the HTS coil will also be studied as future work.

  17. Large-scale protein-protein interaction analysis in Arabidopsis mesophyll protoplasts by split firefly luciferase complementation.

    Science.gov (United States)

    Li, Jian-Feng; Bush, Jenifer; Xiong, Yan; Li, Lei; McCormack, Matthew

    2011-01-01

    Protein-protein interactions (PPIs) constitute the regulatory network that coordinates diverse cellular functions. There are growing needs in plant research for creating protein interaction maps behind complex cellular processes and at a systems biology level. However, only a few approaches have been successfully used for large-scale surveys of PPIs in plants, each having advantages and disadvantages. Here we present split firefly luciferase complementation (SFLC) as a highly sensitive and noninvasive technique for in planta PPI investigation. In this assay, the separate halves of a firefly luciferase can come into close proximity and transiently restore its catalytic activity only when their fusion partners, namely the two proteins of interest, interact with each other. This assay was conferred with quantitativeness and high throughput potential when the Arabidopsis mesophyll protoplast system and a microplate luminometer were employed for protein expression and luciferase measurement, respectively. Using the SFLC assay, we could monitor the dynamics of rapamycin-induced and ascomycin-disrupted interaction between Arabidopsis FRB and human FKBP proteins in a near real-time manner. As a proof of concept for large-scale PPI survey, we further applied the SFLC assay to testing 132 binary PPIs among 8 auxin response factors (ARFs) and 12 Aux/IAA proteins from Arabidopsis. Our results demonstrated that the SFLC assay is ideal for in vivo quantitative PPI analysis in plant cells and is particularly powerful for large-scale binary PPI screens.

  18. Large-scale protein-protein interaction analysis in Arabidopsis mesophyll protoplasts by split firefly luciferase complementation.

    Directory of Open Access Journals (Sweden)

    Jian-Feng Li

    Full Text Available Protein-protein interactions (PPIs) constitute the regulatory network that coordinates diverse cellular functions. There are growing needs in plant research for creating protein interaction maps behind complex cellular processes and at a systems biology level. However, only a few approaches have been successfully used for large-scale surveys of PPIs in plants, each having advantages and disadvantages. Here we present split firefly luciferase complementation (SFLC) as a highly sensitive and noninvasive technique for in planta PPI investigation. In this assay, the separate halves of a firefly luciferase can come into close proximity and transiently restore its catalytic activity only when their fusion partners, namely the two proteins of interest, interact with each other. This assay was conferred with quantitativeness and high throughput potential when the Arabidopsis mesophyll protoplast system and a microplate luminometer were employed for protein expression and luciferase measurement, respectively. Using the SFLC assay, we could monitor the dynamics of rapamycin-induced and ascomycin-disrupted interaction between Arabidopsis FRB and human FKBP proteins in a near real-time manner. As a proof of concept for large-scale PPI survey, we further applied the SFLC assay to testing 132 binary PPIs among 8 auxin response factors (ARFs) and 12 Aux/IAA proteins from Arabidopsis. Our results demonstrated that the SFLC assay is ideal for in vivo quantitative PPI analysis in plant cells and is particularly powerful for large-scale binary PPI screens.

  19. Simultaneous analysis of multielement in Ni-electrolyte by TXRF

    International Nuclear Information System (INIS)

    Yuhong, T.; Kai, L.; Guoli, M.

    2000-01-01

    The TXRF technique is applied in our laboratory to analyze samples containing multiple elements (from Na to U). Two approaches, chemical and electronic, were established with high efficiency to determine the contents of Fe, Co, Ni, Cu, Zn and Pb in the range of 10² to 10¹ μg/mL in a Ni-based electrolyte in which the Ni content reaches about 8×10⁴ μg/mL. Before analysis, a Ni-reduction step was applied to decrease the measurement error for trace elements in the electrolyte. The total time needed for the analysis is less than 20 min. We have built a TXRF spectrometer ourselves. The system stands out for its short optical path, high sensitivity, low power consumption and full automation. The detection limit reaches the ppb level. Powerful and flexible analysis software with a friendly interface was developed by our team on the Windows 98 platform, under which the control system also works. Keywords: TXRF spectrometer, Ni-based electrolyte, multi-element analysis. (author)

  20. Multielement analysis of foods and related materials by NAA

    International Nuclear Information System (INIS)

    Cunningham, W.C.; Anderson, D.L.

    1992-01-01

    This paper presents FDA's use of prompt- and delayed-gamma thermal neutron activation analysis (PGAA and INAA, respectively), collectively referred to here as NAA, for the analysis of foods. Several elements of nutritional or toxicological importance can be simultaneously determined at levels ranging from trace to percent. Concentrations of aluminum, boron, bromine, calcium, chlorine, hydrogen, potassium, magnesium, manganese, nitrogen, sodium, and sulfur can be determined in < 1 day in most foods. For INAA, after a few weeks of decay following irradiation, cobalt, cesium, iron, rubidium, scandium, and zinc can also be determined. Other elements that are detectable in only some food types include cadmium, chromium, copper, iodine, phosphorus, antimony, selenium, titanium, and vanadium.

  1. Multielement analysis of interplanetary dust particles using TOF-SIMS

    Science.gov (United States)

    Stephan, T.; Kloeck, W.; Jessberger, E. K.; Rulle, H.; Zehnpfenning, J.

    1993-01-01

    Sections of three stratospheric particles (U2015G1, W7029*A27, and L2005P9) were analyzed with TOF-SIMS (Time Of Flight-Secondary Ion Mass Spectrometry), continuing our efforts to investigate the element distribution in interplanetary dust particles (IDPs) with high lateral resolution (approximately 0.2 micron), to examine possible atmospheric contamination effects, and to further explore the abilities of this technique for element analysis of small samples. The samples, previously investigated with SXRF (synchrotron X-ray fluorescence analysis), are highly enriched in Br (Br/Fe: 59 x CI, 9.2 x CI, and 116 x CI, respectively). U2015G1 is the IDP with by far the highest Zn/Fe ratio (81 x CI) ever reported in chondritic particles.

  2. Multi-element analysis of small biological samples

    International Nuclear Information System (INIS)

    Rokita, E.; Cafmeyer, J.; Maenhaut, W.

    1983-01-01

    A method combining PIXE and INAA was developed to determine the elemental composition of small biological samples. The method needs virtually no sample preparation, and less than 1 mg is sufficient for the analysis. The method was used for determining up to 18 elements in leaves taken from Cracow Herbaceous. The factors which influence the elemental composition of leaves, and the possible use of leaves as an environmental pollution indicator, are discussed.

  3. Multielement proton activation analysis: application to airborne particulate matter

    International Nuclear Information System (INIS)

    Priest, P.; Devillers, M.; Desaedeleer, G.

    1980-01-01

    Proton activation analysis in the range of 25 to 30 MeV proton energies allows the determination of Na, Mg, Ca, Ti, Fe, Zn, As, Sr, Sn and Pb in airborne particles collected by 4- to 7-stage impactors. Under normal, non-restrictive irradiation and counting conditions, the determination is accurate for samples collected from 1 to 10 m³ of air in rural atmospheres.

  4. Multielement neutron activation analysis of underground water samples

    International Nuclear Information System (INIS)

    Kusaka, Yuzuru; Tsuji, Haruo; Fujimoto, Yuzo; Ishida, Keiko; Mamuro, Tetsuo.

    1980-01-01

    An instrumental neutron activation analysis by gamma-ray spectrometry with high-resolution, large-volume Ge(Li) detectors, followed by data processing with an electronic computer, was applied to the multielement analysis of the underground water, known as miyamizu, that has been widely used in the sake brewing industries of the Mikage, Uozaki and Nishinomiya districts, in order to elucidate its chemical qualities. The evaporated residues of the water samples were subjected to neutron irradiation in a reactor for 1 min at a thermal flux of 1.5 x 10¹² n·cm⁻²·s⁻¹, and for 30 h at a thermal flux of 9.3 x 10¹¹ n·cm⁻²·s⁻¹ or for 5 h at a thermal flux of 3.9 x 10¹² n·cm⁻²·s⁻¹. In this way, 11 elements could be analyzed with the short irradiation and 38 elements with the two kinds of long irradiation. Conventional chemical analyses, including the atomic absorption method, were also applied to the same samples, and putting all the results together, some considerations concerning the geochemical meaning of the analytical values are made. (author)
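
    The irradiation and decay scheme described above follows the standard activation relation A(t_d) = N·φ·σ·(1 − e^(−λ t_i))·e^(−λ t_d). As a hedged illustration, the Python sketch below evaluates it for a rounded textbook case (Na-23 → Na-24) using one of the fluxes quoted in the record; the sample mass and decay time are invented.

        import math

        def induced_activity(n_atoms, flux, sigma_cm2, half_life_s, t_irr_s, t_decay_s):
            """Activity in Bq after irradiating for t_irr_s and decaying for t_decay_s."""
            lam = math.log(2) / half_life_s
            return n_atoms * flux * sigma_cm2 * (1.0 - math.exp(-lam * t_irr_s)) \
                   * math.exp(-lam * t_decay_s)

        n = 1e-6 / 23.0 * 6.022e23                 # atoms in 1 ug of Na-23 (assumed mass)
        a = induced_activity(n, 9.3e11, 0.53e-24,  # flux from the record; sigma ~0.53 barn
                             14.96 * 3600, 30 * 3600, 3600)
        print(f"{a:.3e} Bq of Na-24")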

  5. Large scale electrolysers

    International Nuclear Information System (INIS)

    B. Bello; M. Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large-scale hydrogen production plants will be needed. In this context, the development of low-cost, large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, a survey of the electrolysis modules currently available was compiled, and the large-scale electrolysis plants that have been installed around the world were reviewed. The main projects related to large-scale electrolysis were also listed, the economics of large-scale electrolysers were discussed, and the influence of energy prices on the cost of hydrogen production by large-scale electrolysis was evaluated. (authors)

  6. Multielement ultratrace analysis in tungsten using secondary ion mass spectrometry

    International Nuclear Information System (INIS)

    Wilhartitz, P.; Virag, A.; Friedbacher, G.; Grasserbauer, M.

    1987-01-01

    The ever-increasing demands on the properties of materials create a trend towards ultrapure products. Characterization of these materials is only possible with modern, highly sophisticated analytical techniques such as activation analysis and mass spectrometry, particularly SSMS, SIMS and GDMS. Analytical strategies were developed for the determination of about 40 elements in a tungsten matrix with high-performance SIMS. Difficulties such as the elimination of interferences had to be overcome. Extrapolated detection limits were established in the range of pg/g (alkali metals, halides) to ng/g (e.g. Ta, Th). Depth profiling and ion imaging gave additional information about the lateral and depth distribution of the elements. (orig.)

  7. Multielement analysis of iliac crest bone by neutron activation

    International Nuclear Information System (INIS)

    Aras, N.K.; Yilmaz, G.; Korkusuz, F.; Olmez, I.; Sepici, B.; Eksioglu, F.; Bode, P.

    2000-01-01

    Bone samples from the iliac crest were obtained from apparently healthy female (n = 4) and male (n = 8) subjects aged between 15 and 50. Cortical and trabecular parts were separated, and soft tissues such as fat, muscle and blood were removed. Calcium, Mg, Na, Cl, Fe, Zn, Br, Sr, and Cs were determined by instrumental neutron activation analysis and other techniques, and their relations are discussed. Fairly good agreement was obtained with literature data. These values may serve as reference values for subjects from a Turkish population. (author)

  8. Head of detector for multi-element analysis

    International Nuclear Information System (INIS)

    Frynta, Z.

    1983-01-01

    The detector head mounted on the scintillation counter consists of a hollow hexagonal rotary support arranged axially with the photomultiplier of the scintillation counter. The walls of the hexagonal rotary support contain openings into which absorption filters are inserted. Mounting the absorption filters on the rotary support allows a greater number of elements to be analyzed without dismantling the head and replacing the filters. The suitable geometry of the head is retained, so that the head can be inserted into the hollows in the same way as the scintillation counter. (J.P.)

  9. Multi-element neutron activation analysis of Brazilian coal samples

    International Nuclear Information System (INIS)

    Atalla, L.T.; Requejo, C.S.

    1982-09-01

    The elements U, Th, La, Ce, Nd, Sm, Eu, Dy, Tb, Yb, Lu, Sc, Ta, Hf, Co, Ni, Cr, Mo, Ti, V, W, In, Ga, Mn, Ba, Sr, Mg, Rb, Cs, K, Cl, Br, As, Sb, Au, Ca, Al and Fe were determined in coal samples by instrumental neutron activation analysis, using both thermal and epithermal neutron irradiation. The irradiation times were 10 minutes and 8 or 16 hours in a position where the thermal neutron flux was about 10¹² n·cm⁻²·s⁻¹, and 72 non-consecutive hours for epithermal irradiation at a flux of about 10¹¹ n·cm⁻²·s⁻¹. After the instrumental analysis of the above-mentioned elements, Zn and Se were determined with chemical separation. The relative standard deviation of at least 4 determinations was about ±10% for the majority of the results. The coal samples analysed were supplied by: Cia. Estadual da Tecnologia e Saneamento Basico (CETESB-SP), Cia. de Pesquisas e Lavras Minerais (COPELMI-RS), Cia. Carbonifera Urussunga (SC), Cia. Carbonifera Prospera (SC), Cia. Carbonifera Treviso (SC), Cia. Nacional de Mineracao de Carvao do Barro Branco (SC) and Comissao Nacional de Energia Nuclear (CNEN-RJ). (Author) [pt]

  10. Multi-Element Composition of Honey as a Suitable Tool for Its Authenticity Analysis

    Directory of Open Access Journals (Sweden)

    Oroian Mircea

    2015-06-01

    Full Text Available The aim of this study was to evaluate the composition of 36 honey samples of 4 different botanical origins (acacia, sunflower, tilia and honeydew) from the North-East region of Romania. An inductively coupled plasma-mass spectrometry (ICP-MS) method was used to determine 27 elements in honey (Ag, Al, As, Ba, Be, Ca, Cd, Co, Cr, Cs, Cu, Fe, Ga, K, Li, Mg, Mn, Na, Ni, Pb, Rb, Se, Sr, Tl, U, V and Zn). The goal was to demonstrate that qualitative and quantitative determination of the multi-element composition of honey can be used as a suitable tool to classify honey according to its botanical origin. Principal component analysis allowed the reduction of the 27 variables to 2 principal components, which explained 74% of the total variance. The dominant elements strongly associated with the principal components were K, Mg and Ca. Discriminant models obtained for each kind of botanical honey confirmed that the differentiation of honeys according to their botanical origin was mainly based on multi-element composition. A correct classification of all samples was achieved, with the exception of 11.1% of honeydew honeys.
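
    A minimal sketch of the chemometric workflow described, assuming scikit-learn is available (the paper does not name its software): standardize the element concentrations, condense them with PCA, and fit a discriminant model for botanical origin. The random matrix merely stands in for the 36 x 27 ICP-MS concentration table, which is not reproduced in the record.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.lognormal(size=(36, 27))           # stand-in for element concentrations
        y = np.repeat(["acacia", "sunflower", "tilia", "honeydew"], 9)

        X_std = StandardScaler().fit_transform(X)  # element scales differ by orders of magnitude
        pca = PCA(n_components=2).fit(X_std)
        print("variance explained by 2 PCs:", pca.explained_variance_ratio_.sum())

        lda = LinearDiscriminantAnalysis().fit(X_std, y)
        print("resubstitution accuracy:", lda.score(X_std, y))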

  11. Correlation analysis for forced vibration test of the Hualien large scale seismic test (LSST) program

    International Nuclear Information System (INIS)

    Sugawara, Y.; Sugiyama, T.; Kobayashi, T.; Yamaya, H.; Kitamura, E.

    1995-01-01

    The correlation analysis for a forced vibration test of a 1/4-scale containment SSI test model constructed in Hualien, Taiwan was carried out for the case after backfilling. Prior to this correlation analysis, the structural properties were revised to adjust the calculated fundamental frequency in the fixed-base condition to that derived from the test results. A correlation analysis was carried out using the Lattice Model, which is able to estimate soil-structure effects with embedment. The analysis results coincide well with the test results, and it is concluded that the mathematical soil-structure interaction model established by the correlation analysis is efficient in estimating the dynamic soil-structure interaction effect with embedment. This mathematical model will be applied as a basic model for simulation analysis of earthquake observation records. (author). 3 refs., 12 figs., 2 tabs

  12. Soil-structure interaction analysis of large scale seismic test model at Hualien in Taiwan

    International Nuclear Information System (INIS)

    Jang, J. B.; Ser, Y. P.; Lee, J. L.

    2001-01-01

    The issue of SSI in the seismic analysis and design of NPPs is becoming important, as it may be inevitable to build NPPs at sites with soft foundations owing to the ever-increasing difficulty of acquiring new construction sites for NPPs. The improvement of seismic analysis techniques, including soil-structure interaction analysis, is essential to achieve a reasonable seismic design for the structures, equipment, etc. of NPPs. Therefore, among the existing SSI analysis programs, the most prevalent, SASSI, is verified in this study through comparison of numerical analysis results with the recorded responses of the Hualien project. As a result, SASSI accurately estimated the recorded responses for the fundamental frequency and peak acceleration of the structure, and proved to be reliable and useful for the seismic analysis and design of NPPs.

  13. Open-Source Pipeline for Large-Scale Data Processing, Analysis and Collaboration, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's observational and modeled data products encompass petabytes of earth science data available for analysis, analytics, and exploitation. Unfortunately, these...

  14. Large-scale association analysis identifies 13 new susceptibility loci for coronary artery disease

    NARCIS (Netherlands)

    Schunkert, Heribert; König, Inke R.; Kathiresan, Sekar; Reilly, Muredach P.; Assimes, Themistocles L.; Holm, Hilma; Preuss, Michael; Stewart, Alexandre F. R.; Barbalic, Maja; Gieger, Christian; Absher, Devin; Aherrahrou, Zouhair; Allayee, Hooman; Altshuler, David; Anand, Sonia S.; Andersen, Karl; Anderson, Jeffrey L.; Ardissino, Diego; Ball, Stephen G.; Balmforth, Anthony J.; Barnes, Timothy A.; Becker, Diane M.; Becker, Lewis C.; Berger, Klaus; Bis, Joshua C.; Boekholdt, S. Matthijs; Boerwinkle, Eric; Braund, Peter S.; Brown, Morris J.; Burnett, Mary Susan; Buysschaert, Ian; Carlquist, John F.; Chen, Li; Cichon, Sven; Codd, Veryan; Davies, Robert W.; Dedoussis, George; Dehghan, Abbas; Demissie, Serkalem; Devaney, Joseph M.; Diemert, Patrick; Do, Ron; Doering, Angela; Eifert, Sandra; Mokhtari, Nour Eddine El; Ellis, Stephen G.; Elosua, Roberto; Engert, James C.; Epstein, Stephen E.; de Faire, Ulf; Fischer, Marcus; Folsom, Aaron R.; Freyer, Jennifer; Gigante, Bruna; Girelli, Domenico; Gretarsdottir, Solveig; Gudnason, Vilmundur; Gulcher, Jeffrey R.; Halperin, Eran; Hammond, Naomi; Hazen, Stanley L.; Hofman, Albert; Horne, Benjamin D.; Illig, Thomas; Iribarren, Carlos; Jones, Gregory T.; Jukema, J. Wouter; Kaiser, Michael A.; Kaplan, Lee M.; Kastelein, John J. P.; Khaw, Kay-Tee; Knowles, Joshua W.; Kolovou, Genovefa; Kong, Augustine; Laaksonen, Reijo; Lambrechts, Diether; Leander, Karin; Lettre, Guillaume; Li, Mingyao; Lieb, Wolfgang; Loley, Christina; Lotery, Andrew J.; Mannucci, Pier M.; Maouche, Seraya; Martinelli, Nicola; McKeown, Pascal P.; Meisinger, Christa; Meitinger, Thomas; Melander, Olle; Merlini, Pier Angelica; Mooser, Vincent; Morgan, Thomas; Mühleisen, Thomas W.; Muhlestein, Joseph B.; Münzel, Thomas; Musunuru, Kiran; Nahrstaedt, Janja; Nelson, Christopher P.; Nöthen, Markus M.; Olivieri, Oliviero; Patel, Riyaz S.; Patterson, Chris C.; Peters, Annette; Peyvandi, Flora; Qu, Liming; Quyyumi, Arshed A.; Rader, Daniel J.; Rallidis, Loukianos S.; Rice, Catherine; Rosendaal, Frits R.; Rubin, Diana; Salomaa, Veikko; Sampietro, M. Lourdes; Sandhu, Manj S.; Schadt, Eric; Schäfer, Arne; Schillert, Arne; Schreiber, Stefan; Schrezenmeir, Jürgen; Schwartz, Stephen M.; Siscovick, David S.; Sivananthan, Mohan; Sivapalaratnam, Suthesh; Smith, Albert; Smith, Tamara B.; Snoep, Jaapjan D.; Soranzo, Nicole; Spertus, John A.; Stark, Klaus; Stirrups, Kathy; Stoll, Monika; Tang, W. H. Wilson; Tennstedt, Stephanie; Thorgeirsson, Gudmundur; Thorleifsson, Gudmar; Tomaszewski, Maciej; Uitterlinden, Andre G.; van Rij, Andre M.; Voight, Benjamin F.; Wareham, Nick J.; Wells, George A.; Wichmann, H.-Erich; Wild, Philipp S.; Willenborg, Christina; Witteman, Jaqueline C. M.; Wright, Benjamin J.; Ye, Shu; Zeller, Tanja; Ziegler, Andreas; Cambien, Francois; Goodall, Alison H.; Cupples, L. Adrienne; Quertermous, Thomas; März, Winfried; Hengstenberg, Christian; Blankenberg, Stefan; Ouwehand, Willem H.; Hall, Alistair S.; Deloukas, Panos; Thompson, John R.; Stefansson, Kari; Roberts, Robert; Thorsteinsdottir, Unnur; O'Donnell, Christopher J.; McPherson, Ruth; Erdmann, Jeanette; Samani, Nilesh J.

    2011-01-01

    We performed a meta-analysis of 14 genome-wide association studies of coronary artery disease (CAD) comprising 22,233 individuals with CAD (cases) and 64,762 controls of European descent followed by genotyping of top association signals in 56,682 additional individuals. This analysis identified 13 new susceptibility loci for CAD.

  15. Large-scale association analysis identifies 13 new susceptibility loci for coronary artery disease

    NARCIS (Netherlands)

    H. Schunkert (Heribert); I.R. König (Inke); S. Kathiresan (Sekar); M.P. Reilly (Muredach); T.L. Assimes (Themistocles); H. Holm (Hilma); M. Preuss (Michael); A.F.R. Stewart (Alexandre); M. Barbalic (Maja); C. Gieger (Christian); D. Absher (Devin); Z. Aherrahrou (Zouhair); H. Allayee (Hooman); D. Altshuler (David); S.S. Anand (Sonia); K.K. Andersen (Karl); J.L. Anderson (Jeffrey); D. Ardissino (Diego); S.G. Ball (Stephen); A.J. Balmforth (Anthony); T.A. Barnes (Timothy); D.M. Becker (Diane); K. Berger (Klaus); J.C. Bis (Joshua); S.M. Boekholdt (Matthijs); E.A. Boerwinkle (Eric); P.S. Braund (Peter); M.J. Brown (Morris); M.S. Burnett; I. Buysschaert (Ian); J.F. Carlquist (John); L. Chen (Li); S. Cichon (Sven); V. Codd (Veryan); R.W. Davies (Robert); G.V. Dedoussis (George); A. Dehghan (Abbas); S. Demissie (Serkalem); J. Devaney (Joseph); P. Diemert (Patrick); R. Do (Ron); A. Doering (Angela); S. Eifert (Sandra); N.E.E. Mokhtari; S.G. Ellis (Stephen); R. Elosua (Roberto); J.C. Engert (James); S.E. Epstein (Stephen); U. de Faire (Ulf); M. Fischer (Marcus); A.R. Folsom (Aaron); J. Freyer (Jennifer); B. Gigante (Bruna); D. Girelli (Domenico); S. Gretarsdottir (Solveig); V. Gudnason (Vilmundur); J.R. Gulcher (Jeffrey); E. Halperin (Eran); N. Hammond (Naomi); S.L. Hazen (Stanley); A. Hofman (Albert); B.D. Horne (Benjamin); T. Illig (Thomas); C. Iribarren (Carlos); G.T. Jones (Gregory); J.W. Jukema (Jan Wouter); M.A. Kaiser (Michael); R.C. Kaplan (Robert); K-T. Khaw (Kay-Tee); J.W. Knowles (Joshua); G. Kolovou (Genovefa); A. Kong (Augustine); R. Laaksonen (Reijo); D. Lambrechts (Diether); K. Leander (Karin); G. Lettre (Guillaume); X. Li (Xiaohui); W. Lieb (Wolfgang); C. Loley (Christina); A.J. Lotery (Andrew); P.M. Mannucci (Pier); S. Maouche (Seraya); N. Martinelli (Nicola); P.P. McKeown (Pascal); C. Meisinger (Christa); T. Meitinger (Thomas); O. Melander (Olle); P.A. Merlini; V. Mooser (Vincent); T. Morgan (Thomas); T.W. Mühleisen (Thomas); J.B. Muhlestein (Joseph); T. Münzel (Thomas); K. Musunuru (Kiran); J. Nahrstaedt (Janja); C.P. Nelson (Christopher P.); M.M. Nöthen (Markus); O. Olivieri (Oliviero); R.S. Patel (Riyaz); C.C. Patterson (Chris); A. Peters (Annette); F. Peyvandi (Flora); L. Qu (Liming); A.A. Quyyumi (Arshed); D.J. Rader (Daniel); L.S. Rallidis (Loukianos); C. Rice (Catherine); F.R. Rosendaal (Frits); D. Rubin (Diana); V. Salomaa (Veikko); M.L. Sampietro (Maria Lourdes); M.S. Sandhu (Manj); E.E. Schadt (Eric); A. Schäfer (Arne); A. Schillert (Arne); S. Schreiber (Stefan); J. Schrezenmeir (Jürgen); S.M. Schwartz (Stephen); D.S. Siscovick (David); M. Sivananthan (Mohan); S. Sivapalaratnam (Suthesh); A.V. Smith (Albert Vernon); J.D. Snoep (Jaapjan); N. Soranzo (Nicole); J.A. Spertus (John); K. Stark (Klaus); K. Stirrups (Kathy); M. Stoll (Monika); W.H.W. Tang (Wilson); S. Tennstedt (Stephanie); G. Thorgeirsson (Gudmundur); G. Thorleifsson (Gudmar); M. Tomaszewski (Maciej); A.G. Uitterlinden (André); A.M. van Rij (Andre); B.F. Voight (Benjamin); N.J. Wareham (Nick); G.A. Wells (George); H.E. Wichmann (Heinz Erich); P.S. Wild (Philipp); C. Willenborg (Christina); J.C.M. Witteman (Jacqueline); B.J. Wright (Benjamin); S. Ye (Shu); T. Zeller (Tanja); A. Ziegler (Andreas); F. Cambien (François); A.H. Goodall (Alison); L.A. Cupples (Adrienne); T. Quertermous (Thomas); W. März (Winfried); C. Hengstenberg (Christian); S. Blankenberg (Stefan); W.H. Ouwehand (Willem); A.S. Hall (Alistair); J.J.P. Kastelein (John); P. Deloukas (Panagiotis); J.R. Thompson (John); K. Stefansson (Kari); R. Roberts (Robert); U. Thorsteinsdottir (Unnur); C.J. O'Donnell (Christopher); R. McPherson (Ruth); J. Erdmann (Jeanette); N.J. Samani (Nilesh)

    2011-01-01

    We performed a meta-analysis of 14 genome-wide association studies of coronary artery disease (CAD) comprising 22,233 individuals with CAD (cases) and 64,762 controls of European descent followed by genotyping of top association signals in 56,682 additional individuals. This analysis identified 13 new susceptibility loci for CAD.

  16. Analyzing the State of Static Analysis : A Large-Scale Evaluation in Open Source Software

    NARCIS (Netherlands)

    Beller, M.; Bholanath, R.; McIntosh, S.; Zaidman, A.E.

    2016-01-01

    The use of automatic static analysis has been a software engineering best practice for decades. However, we still do not know a lot about its use in real-world software projects: How prevalent is the use of Automated Static Analysis Tools (ASATs) such as FindBugs and JSHint? How do developers use

  17. Ssecrett and neuroTrace: Interactive visualization and analysis tools for large-scale neuroscience data sets

    KAUST Repository

    Jeong, Wonki; Beyer, Johanna; Hadwiger, Markus; Blue, Rusty; Law, Charles; Vázquez Reina, Amelio; Reid, Rollie Clay; Lichtman, Jeff W., M.D.; Pfister, Hanspeter

    2010-01-01

    Recent advances in optical and electron microscopy let scientists acquire extremely high-resolution images for neuroscience research. Data sets imaged with modern electron microscopes can range between tens of terabytes to about one petabyte. These large data sizes and the high complexity of the underlying neural structures make it very challenging to handle the data at reasonably interactive rates. To provide neuroscientists flexible, interactive tools, the authors introduce Ssecrett and NeuroTrace, two tools they designed for interactive exploration and analysis of large-scale optical- and electron-microscopy images to reconstruct complex neural circuits of the mammalian nervous system. © 2010 IEEE.

  18. Analysis of effectiveness of possible queuing models at gas stations using the large-scale queuing theory

    Directory of Open Access Journals (Sweden)

    Slaviša M. Ilić

    2011-10-01

    Full Text Available This paper analyzes the effectiveness of possible queuing models at gas stations, using a mathematical model from large-scale queuing theory. Based on actual data collected and a statistical analysis of the expected intensity of vehicle arrivals and queuing at gas stations, mathematical modeling of the real queuing process was carried out and certain parameters were quantified, highlighting the weaknesses of the existing models and the possible benefits of an automated queuing model.
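
    The record does not reproduce its model, but queuing evaluations of this kind typically rest on M/M/s formulas such as Erlang C. A small Python sketch under that assumption, with illustrative arrival and service rates rather than the collected data:

        import math

        def erlang_c(arrival, service, servers):
            """Probability that an arriving vehicle must wait (requires load < servers)."""
            a = arrival / service                  # offered load in Erlangs
            if a >= servers:
                raise ValueError("unstable queue")
            waiting = a**servers / math.factorial(servers) * servers / (servers - a)
            total = sum(a**k / math.factorial(k) for k in range(servers)) + waiting
            return waiting / total

        lam, mu, s = 0.8, 0.5, 2                   # vehicles/min, services/min per pump, pumps
        p_wait = erlang_c(lam, mu, s)
        wq = p_wait / (s * mu - lam)               # mean wait in queue, minutes
        print(f"P(wait) = {p_wait:.2f}, Wq = {wq:.2f} min")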

  19. Ssecrett and neuroTrace: Interactive visualization and analysis tools for large-scale neuroscience data sets

    KAUST Repository

    Jeong, Wonki

    2010-05-01

    Recent advances in optical and electron microscopy let scientists acquire extremely high-resolution images for neuroscience research. Data sets imaged with modern electron microscopes can range between tens of terabytes to about one petabyte. These large data sizes and the high complexity of the underlying neural structures make it very challenging to handle the data at reasonably interactive rates. To provide neuroscientists flexible, interactive tools, the authors introduce Ssecrett and NeuroTrace, two tools they designed for interactive exploration and analysis of large-scale optical- and electron-microscopy images to reconstruct complex neural circuits of the mammalian nervous system. © 2010 IEEE.

  20. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-01-01

    about the interferers symbols can be approximated via the Gaussian signaling approach. The developed mathematical model presents twofold analysis unification for uplink and downlink cellular networks literature. It aligns the tangible decoding error

  1. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, and in identifying the relative influence of parameters when estimating errors in model behavior due to uncertainty in input data, is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models, which could be useful for making rapid, first-order calculations of system behavior, is presented.

  2. Proteinortho: Detection of (Co-)orthologs in large-scale analysis

    OpenAIRE

    Lechner, Marcus; Findeiß, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-01-01

    Background: Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes i...

  3. Large scale and low latency analysis facilities for the CMS experiment: development and operational aspects

    CERN Document Server

    Riahi, Hassen

    2010-01-01

    While the majority of CMS data analysis activities rely on the distributed computing infrastructure of the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In order to reach the goal for fast-turnaround tasks, the Workload Management group has designed a CRABServer-based system to fit two main needs: to provide a simple, familiar interface to the user (as used in the CRAB Analysis Tool[7]) and to allow an easy transition to the Tier-0 system. While the CRABServer component had been initially designed for Grid analysis by CMS end-users, with a few modifications it turned out to be also a very powerful service to manage and monitor local submissions on the CAF. Tran...

  4. Challenges in the Setup of Large-scale Next-Generation Sequencing Analysis Workflows

    Directory of Open Access Journals (Sweden)

    Pranav Kulkarni

    Full Text Available While Next-Generation Sequencing (NGS) can now be considered an established analysis technology for research applications across the life sciences, the analysis workflows still require substantial bioinformatics expertise. Typical challenges include the appropriate selection of analytical software tools, the speedup of the overall procedure using HPC parallelization and acceleration technology, the development of automation strategies, data storage solutions and finally the development of methods for full exploitation of the analysis results across multiple experimental conditions. Recently, NGS has begun to expand into clinical environments, where it facilitates diagnostics enabling personalized therapeutic approaches, but is also accompanied by new technological, legal and ethical challenges. There are probably as many overall concepts for the analysis of the data as there are academic research institutions. Among these concepts are, for instance, complex IT architectures developed in-house, ready-to-use technologies installed on-site as well as comprehensive Everything as a Service (XaaS) solutions. In this mini-review, we summarize the key points to consider in the setup of the analysis architectures, mostly for scientific rather than diagnostic purposes, and provide an overview of the current state of the art and challenges of the field.

  5. Data management in large-scale collaborative toxicity studies: how to file experimental data for automated statistical analysis.

    Science.gov (United States)

    Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette

    2013-06-01

    High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
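
    A minimal sketch of the kind of automation the paper describes: sweep a folder of researcher Excel files, enforce one standardized layout, and pool the concentration-response records for scripted analysis. The folder, file pattern and column names are assumptions for illustration, not the ACuteTox format, and pandas stands in for the unnamed computer programme.

        from pathlib import Path

        import pandas as pd

        REQUIRED = ["compound", "concentration", "response"]

        def load_standardised(folder):
            """Pool all Excel files in `folder` that follow the agreed column layout."""
            frames = []
            for path in sorted(Path(folder).glob("*.xlsx")):
                df = pd.read_excel(path)
                missing = [c for c in REQUIRED if c not in df.columns]
                if missing:
                    raise ValueError(f"{path.name}: missing columns {missing}")
                frames.append(df[REQUIRED].assign(source=path.name))
            return pd.concat(frames, ignore_index=True)

        # pooled = load_standardised("toxicity_data")  # then fit curves per compound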

  6. Large Scale Management of Physicists Personal Analysis Data Without Employing User and Group Quotas

    International Nuclear Information System (INIS)

    Norman, A.; Diesburg, M.; Gheith, M.; Illingworth, R.; Lyon, A.; Mengel, M.

    2015-01-01

    The ability of modern HEP experiments to acquire and process unprecedented amounts of data and simulation has led to an explosion in the volume of information that individual scientists deal with on a daily basis. This explosion has resulted in a need for individuals to generate and keep large personal analysis data sets which represent the skimmed portions of official data collections pertaining to their specific analysis. While a significant reduction in size compared to the original data, these personal analysis and simulation sets can be many terabytes or tens of terabytes in size and consist of tens of thousands of files. When this personal data is aggregated across the many physicists in a single analysis group or experiment, it can represent data volumes on par with or exceeding the official production samples, which require special data handling techniques to deal with effectively. In this paper we explore the changes to the Fermilab computing infrastructure and computing models which have been developed to allow experimenters to effectively manage their personal analysis data and other data that falls outside of the typically centrally managed production chains. In particular we describe the models and tools that are being used to provide the modern neutrino experiments like NOvA with storage resources that are sufficient to meet their analysis needs, without imposing specific quotas on users or groups of users. We discuss the storage mechanisms and the caching algorithms that are being used, as well as the toolkits that have been developed to allow the users to easily operate with terascale+ datasets. (paper)
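
    The paper does not spell out its caching algorithms, but the simplest policy consistent with quota-free personal storage is eviction by least-recent use against a byte capacity. A hypothetical Python sketch of that idea, not the actual Fermilab implementation:

        from collections import OrderedDict

        class ByteLRUCache:
            """Toy byte-capacity cache: least-recently-used files are evicted first."""

            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.files = OrderedDict()         # name -> size, oldest access first

            def touch(self, name, size):
                if name in self.files:
                    self.files.move_to_end(name)   # refresh recency on access
                else:
                    self.files[name] = size
                while sum(self.files.values()) > self.capacity and len(self.files) > 1:
                    evicted, _ = self.files.popitem(last=False)
                    print("evict", evicted)

        cache = ByteLRUCache(10 * 2**40)           # 10 TiB, illustrative only
        cache.touch("skim_run1.root", 3 * 2**40)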

  7. Analysis of large scale UO2 Na interactions performed in Europe

    International Nuclear Information System (INIS)

    Berthoud, G.; Jacobs, H.; Knowles, B.

    1994-01-01

    Analysis of the European out-of-pile fuel-sodium interaction experiments involving kilogram masses of molten oxide is reported, namely CORECT 2 (CEA), SUS and MFTF-B (AEA), and THINA (KfK). Common conclusions are then drawn. (author)

  8. Twelve type 2 diabetes susceptibility loci identified through large-scale association analysis

    DEFF Research Database (Denmark)

    Voight, Benjamin F; Scott, Laura J; Steinthorsdottir, Valgerdur

    2010-01-01

    By combining genome-wide association data from 8,130 individuals with type 2 diabetes (T2D) and 38,987 controls of European descent and following up previously unidentified meta-analysis signals in a further 34,412 cases and 59,925 controls, we identified 12 new T2D association signals...

  9. Twelve type 2 diabetes susceptibility loci identified through large-scale association analysis

    NARCIS (Netherlands)

    B.F. Voight (Benjamin); L.J. Scott (Laura); V. Steinthorsdottir (Valgerdur); A.D. Morris (Andrew); C. Dina (Christian); R.P. Welch (Ryan); E. Zeggini (Eleftheria); C. Huth (Cornelia); Y.S. Aulchenko (Yurii); G. Thorleifsson (Gudmar); L.J. McCulloch (Laura); T. Ferreira (Teresa); H. Grallert (Harald); N. Amin (Najaf); G. Wu (Guanming); C.J. Willer (Cristen); S. Raychaudhuri (Soumya); S.A. McCarroll (Steven); C. Langenberg (Claudia); O.M. Hofmann (Oliver); J. Dupuis (Josée); L. Qi (Lu); A.V. Segrè (Ayellet); M. van Hoek (Mandy); P. Navarro (Pau); K.G. Ardlie (Kristin); B. Balkau (Beverley); R. Benediktsson (Rafn); A.J. Bennett (Amanda); R. Blagieva (Roza); E.A. Boerwinkle (Eric); L.L. Bonnycastle (Lori); K.B. Boström (Kristina Bengtsson); B. Bravenboer (Bert); S. Bumpstead (Suzannah); N.P. Burtt (Noël); G. Charpentier (Guillaume); P.S. Chines (Peter); M. Cornelis (Marilyn); D.J. Couper (David); G. Crawford (Gabe); A.S.F. Doney (Alex); K.S. Elliott (Katherine); M.R. Erdos (Michael); C.S. Fox (Caroline); C.S. Franklin (Christopher); M. Ganser (Martha); C. Gieger (Christian); N. Grarup (Niels); T. Green (Todd); S. Griffin (Simon); C.J. Groves (Christopher); C. Guiducci (Candace); S. Hadjadj (Samy); N. Hassanali (Neelam); C. Herder (Christian); B. Isomaa (Bo); A.U. Jackson (Anne); P.R.V. Johnson (Paul); T. Jørgensen (Torben); W.H.L. Kao (Wen); N. Klopp (Norman); A. Kong (Augustine); P. Kraft (Peter); J. Kuusisto (Johanna); T. Lauritzen (Torsten); M. Li (Man); A. Lieverse (Aloysius); C.M. Lindgren (Cecilia); V. Lyssenko (Valeriya); M. Marre (Michel); T. Meitinger (Thomas); K. Midthjell (Kristian); M.A. Morken (Mario); N. Narisu (Narisu); P. Nilsson (Peter); K.R. Owen (Katharine); F. Payne (Felicity); J.R.B. Perry (John); A.K. Petersen; C. Platou (Carl); C. Proença (Christine); I. Prokopenko (Inga); W. Rathmann (Wolfgang); N.W. Rayner (Nigel William); N.R. Robertson (Neil); G. Rocheleau (Ghislain); M. Roden (Michael); M.J. Sampson (Michael); R. Saxena (Richa); B.M. Shields (Beverley); P. Shrader (Peter); G. Sigurdsson (Gunnar); T. Sparsø (Thomas); K. Strassburger (Klaus); H.M. Stringham (Heather); Q. Sun (Qi); A.J. Swift (Amy); B. Thorand (Barbara); J. Tichet (Jean); T. Tuomi (Tiinamaija); R.M. van Dam (Rob); T.W. van Haeften (Timon); T.W. van Herpt (Thijs); J.V. van Vliet-Ostaptchouk (Jana); G.B. Walters (Bragi); M.N. Weedon (Michael); C. Wijmenga (Cisca); J.C.M. Witteman (Jacqueline); R.N. Bergman (Richard); S. Cauchi (Stephane); F.S. Collins (Francis); A.L. Gloyn (Anna); U. Gyllensten (Ulf); T. Hansen (Torben); W.A. Hide (Winston); G.A. Hitman (Graham); A. Hofman (Albert); D. Hunter (David); K. Hveem (Kristian); M. Laakso (Markku); K.L. Mohlke (Karen); C.N.A. Palmer (Colin); P.P. Pramstaller (Peter Paul); I. Rudan (Igor); E.J.G. Sijbrands (Eric); L.D. Stein (Lincoln); J. Tuomilehto (Jaakko); A.G. Uitterlinden (André); M. Walker (Mark); N.J. Wareham (Nick); G.R. Abecasis (Gonçalo); B.O. Boehm (Bernhard); H. Campbell (Harry); M.J. Daly (Mark); A.T. Hattersley (Andrew); F.B. Hu (Frank); J.B. Meigs (James); J.S. Pankow (James); O. Pedersen (Oluf); H.E. Wichmann (Erich); I.E. Barroso (Inês); J.C. Florez (Jose); T.M. Frayling (Timothy); L. Groop (Leif); R. Sladek (Rob); U. Thorsteinsdottir (Unnur); J.F. Wilson (James); T. Illig (Thomas); P. Froguel (Philippe); P. Tikka-Kleemola (Päivi); J-A. Zwart (John-Anker); D. Altshuler (David); M. Boehnke (Michael); M.I. McCarthy (Mark); R.M. Watanabe (Richard)

    2010-01-01

    By combining genome-wide association data from 8,130 individuals with type 2 diabetes (T2D) and 38,987 controls of European descent and following up previously unidentified meta-analysis signals in a further 34,412 cases and 59,925 controls, we identified 12 new T2D association signals.

  10. Scramjet test flow reconstruction for a large-scale expansion tube, Part 2: axisymmetric CFD analysis

    Science.gov (United States)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2017-11-01

    This paper presents the second part of a study aiming to accurately characterise a Mach 10 scramjet test flow generated using a large free-piston-driven expansion tube. Part 1 described the experimental set-up, the quasi-one-dimensional simulation of the full facility, and the hybrid analysis technique used to compute the nozzle exit test flow properties. The second stage of the hybrid analysis applies the computed 1-D shock tube flow history as an inflow to a high-fidelity two-dimensional-axisymmetric analysis of the acceleration tube. The acceleration tube exit flow history is then applied as an inflow to a further refined axisymmetric nozzle model, providing the final nozzle exit test flow properties and thereby completing the analysis. This paper presents the results of the axisymmetric analyses. These simulations are shown to closely reproduce experimentally measured shock speeds and acceleration tube static pressure histories, as well as nozzle centreline static and impact pressure histories. The hybrid scheme less successfully predicts the diameter of the core test flow; however, this property is readily measured through experimental pitot surveys. In combination, the full test flow history can be accurately determined.

  11. Large-scale gene-centric analysis identifies novel variants for coronary artery disease

    NARCIS (Netherlands)

    Butterworth, A.S.; Braund, P.S.; Hardwick, R.J.; Saleheen, D.; Peden, J.F.; Soranzo, N.; Chambers, J.C.; Kleber, M.E.; Keating, B.; Qasim, A.; Klopp, N.; Erdmann, J.; Basart, H.; Baumert, J.H.; Bezzina, C.R.; Boehm, B.O.; Brocheton, J.; Bugert, P.; Cambien, F.; Collins, R.; Couper, D.; Jong, J.S. de; Diemert, P.; Ejebe, K.; Elbers, C.C.; Elliott, P.; Fornage, M.; Frossard, P.; Garner, S.; Hunt, S.E.; Kastelein, J.J.; Klungel, O.H.; Kluter, H.; Koch, K.; Konig, I.R.; Kooner, A.S.; Liu, K.; McPherson, R.; Musameh, M.D.; Musani, S.; Papanicolaou, G.; Peters, A.; Peters, B.J.; Potter, S.; Psaty, B.M.; Rasheed, A.; Scott, J.; Seedorf, U.; Sehmi, J.S.; Sotoodehnia, N.; Stark, K.; Stephens, J.; Schoot, C.E. van der; Schouw, Y.T. van der; Harst, P. van der; Vasan, R.S.; Wilde, A.A.; Willenborg, C.; Winkelmann, B.R.; Zaidi, M.; Zhang, W.; Ziegler, A.; Koenig, W.; Matz, W.; Trip, M.D.; Reilly, M.P.; Kathiresan, S.; Schunkert, H.; Hamsten, A.; Hall, A.S.; Kooner, J.S.; Thompson, S.G.; Thompson, J.R.; Watkins, H.; Danesh, J.; Barnes, T.; Rafelt, S.; Codd, V.; Bruinsma, N.; Dekker, L.R.; Henriques, J.P.; Koch, K.T.; Winter, R.J. de; Alings, M.; Allaart, C.F.; Gorgels, A.P.; Verheugt, F.W.A.; Mueller, M.; Meisinger, C.; DerOhannessian, S.; Mehta, N.N.; Ferguson, J.; Hakonarson, H.; Matthai, W.; Wilensky, R.; Hopewell, J.C.; Parish, S.; Linksted, P.; Notman, J.; Gonzalez, H.; Young, A.; Ostley, T.; Munday, A.; Goodwin, N.; Verdon, V.; Shah, S.; Edwards, C.; Mathews, C.; Gunter, R.; Benham, J.; Davies, C.; Cobb, M.; Cobb, L.; Crowther, J.; Richards, A.; Silver, M.; Tochlin, S.; Mozley, S.; Clark, S.; Radley, M.; Kourellias, K.; Olsson, P.; Barlera, S.; Tognoni, G.; Rust, S.; Assmann, G.; Heath, S.; Zelenika, D.; Gut, I.; Green, F.; Farrall, M.; Goel, A.; Ongen, H.; Franzosi, M.G.; Lathrop, M.; Clarke, R.; Aly, A.; Anner, K.; Bjorklund, K.; Blomgren, G.; Cederschiold, B.; Danell-Toverud, K.; Eriksson, P.; Grundstedt, U.; Heinonen, M.; Hellenius, M.L.; Hooft, F. van 't; Husman, K.; Lagercrantz, J.; Larsson, A.; Larsson, M.; Mossfeldt, M.; Malarstig, A.; Olsson, G.; Sabater-Lleal, M.; Sennblad, B.; Silveira, A.; Strawbridge, R.; Soderholm, B.; Ohrvik, J.; Zaman, K.S.; Mallick, N.H.; Azhar, M.; Samad, A.; Ishaq, M.; Shah, N.; Samuel, M.; Kathiresan, S.C.; Assimes, T.L.; Holm, H.; Preuss, M.; Stewart, A.F.; Barbalic, M.; Gieger, C.; Absher, D.; Aherrahrou, Z.; Allayee, H.; Altshuler, D.; Anand, S.; Andersen, K.; Anderson, J.L.; Ardissino, D.; Ball, S.G.; Balmforth, A.J.; Barnes, T.A.; Becker, L.C.; Becker, D.M.; Berger, K.; Bis, J.C.; Boekholdt, S.M.; Boerwinkle, E.; Brown, M.J.; Burnett, M.S.; Buysschaert, I.; Carlquist, J.F.; Chen, L.; Davies, R.W.; Dedoussis, G.; Dehghan, A.; Demissie, S.; Devaney, J.; Do, R.; Doering, A.; El Mokhtari, N.E.; Ellis, S.G.; Elosua, R.; Engert, J.C.; Epstein, S.; Faire, U. 
    de; Fischer, M.; Folsom, A.R.; Freyer, J.; Gigante, B.; Girelli, D.; Gretarsdottir, S.; Gudnason, V.; Gulcher, J.R.; Tennstedt, S.; Halperin, E.; Hammond, N.; Hazen, S.L.; Hofman, A.; Horne, B.D.; Illig, T.; Iribarren, C.; Jones, G.T.; Jukema, J.W.; Kaiser, M.A.; Kaplan, L.M.; Khaw, K.T.; Knowles, J.W.; Kolovou, G.; Kong, A.; Laaksonen, R.; Lambrechts, D.; Leander, K.; Li, M.; Lieb, W.; Lettre, G.; Loley, C.; Lotery, A.J.; Mannucci, P.M.; Martinelli, N.; McKeown, P.P.; Meitinger, T.; Melander, O.; Merlini, P.A.; Mooser, V.; Morgan, T.; Muhleisen, T.W.; Muhlestein, J.B.; Musunuru, K.; Nahrstaedt, J.; Nothen, Markus; Olivieri, O.; Peyvandi, F.; Patel, R.S.; Patterson, C.C.; Qu, L.; Quyyumi, A.A.; Rader, D.J.; Rallidis, L.S.; Rice, C.; Rosendaal, F.R.; Rubin, D.; Salomaa, V.; Sampietro, M.L.; Sandhu, M.S.; Schadt, E.; Schafer, A.; Schillert, A.; Schreiber, S.; Schrezenmeir, J.; Schwartz, S.M.; Siscovick, D.S.; Sivananthan, M.; Sivapalaratnam, S.; Smith, A.V.; Smith, T.B.; Snoep, J.D.; Spertus, J.A.; Stefansson, K.; Stirrups, K.; Stoll, M.; Tang, W.H.; Thorgeirsson, G.; Thorleifsson, G.; Tomaszewski, M.; Uitterlinden, A.G.; Rij, A.M. van; Voight, B.F.; Wareham, N.J.; Wells, G.A.; Wichmann, H.E.; Witteman, J.C.; Wright, B.J.; Ye, S.; Cupples, L.A.; Quertermous, T.; Marz, W.; Blankenberg, S.; Thorsteinsdottir, U.; Roberts, R.; O'Donnell, C.J.; Onland-Moret, N.C.; Setten, J. van; Bakker, P.I. de; Verschuren, W.M.; Boer, J.M.; Wijmenga, C.; Hofker, M.H.; Maitland-van der Zee, A.H.; Boer, A. de; Grobbee, D.E.; Attwood, T.; Belz, S.; Cooper, J.; Crisp-Hihn, A.; Deloukas, P.; Foad, N.; Goodall, A.H.; Gracey, J.; Gray, E.; Gwilliams, R.; Heimerl, S.; Hengstenberg, C.; Jolley, J.; Krishnan, U.; Lloyd-Jones, H.; Lugauer, I.; Lundmark, P.; Maouche, S.; Moore, J.S.; Muir, D.; Murray, E.; Nelson, C.P.; Neudert, J.; Niblett, D.; O'Leary, K.; Ouwehand, W.H.; Pollard, H.; Rankin, A.; Rice, C.M.; Sager, H.; Samani, N.J.; Sambrook, J.; Schmitz, G.; Scholz, M.; Schroeder, L.; Syvannen, A.C.; Wallace, C.

    2011-01-01

    Coronary artery disease (CAD) has a significant genetic contribution that is incompletely characterized. To complement genome-wide association (GWA) studies, we conducted a large and systematic candidate gene study of CAD susceptibility, including analysis of many uncommon and functional variants.

  12. Geological analysis of paleozoic large-scale faulting in the south-central Pyrenees

    NARCIS (Netherlands)

    Speksnijder, A.

    1986-01-01

    Detailed structural and sedimentological analysis reveals the existence of an east-west directed fundamental fault zone in the south-central Pyrenees, which has been intermittently active from (at least) the Devonian on. Emphasis is laid on the study of fault-bounded post-Variscan

  13. Geological analysis of paleozoic large-scale faulting in the south-central Pyrenees

    NARCIS (Netherlands)

    Speksnijder, A.

    1986-01-01

    Detailed structural and sedimentological analysis reveals the existence of an east-west directed fundamental fault zone in the south-central Pyrenees, which has been intermittently active from (at least) the Devonian on. Emphasis is laid on the study of fault-bounded post-Variscan (StephanoPermian)

  14. Large-scale association analysis identifies new risk loci for coronary artery disease

    NARCIS (Netherlands)

    Deloukas, Panos; Kanoni, Stavroula; Willenborg, Christina; Farrall, Martin; Assimes, Themistocles L.; Thompson, John R.; Ingelsson, Erik; Saleheen, Danish; Erdmann, Jeanette; Goldstein, Benjamin A.; Stirrups, Kathleen; König, Inke R.; Cazier, Jean-Baptiste; Johansson, Asa; Hall, Alistair S.; Lee, Jong-Young; Willer, Cristen J.; Chambers, John C.; Esko, Tõnu; Folkersen, Lasse; Goel, Anuj; Grundberg, Elin; Havulinna, Aki S.; Ho, Weang K.; Hopewell, Jemma C.; Eriksson, Niclas; Kleber, Marcus E.; Kristiansson, Kati; Lundmark, Per; Lyytikäinen, Leo-Pekka; Rafelt, Suzanne; Shungin, Dmitry; Strawbridge, Rona J.; Thorleifsson, Gudmar; Tikkanen, Emmi; van Zuydam, Natalie; Voight, Benjamin F.; Waite, Lindsay L.; Zhang, Weihua; Ziegler, Andreas; Absher, Devin; Altshuler, David; Balmforth, Anthony J.; Barroso, Inês; Braund, Peter S.; Burgdorf, Christof; Claudi-Boehm, Simone; Cox, David; Dimitriou, Maria; Do, Ron; Doney, Alex S. F.; El Mokhtari, NourEddine; Eriksson, Per; Fischer, Krista; Fontanillas, Pierre; Franco-Cereceda, Anders; Gigante, Bruna; Groop, Leif; Gustafsson, Stefan; Hager, Jörg; Hallmans, Göran; Han, Bok-Ghee; Hunt, Sarah E.; Kang, Hyun M.; Illig, Thomas; Kessler, Thorsten; Knowles, Joshua W.; Kolovou, Genovefa; Kuusisto, Johanna; Langenberg, Claudia; Langford, Cordelia; Leander, Karin; Lokki, Marja-Liisa; Lundmark, Anders; McCarthy, Mark I.; Meisinger, Christa; Melander, Olle; Mihailov, Evelin; Maouche, Seraya; Morris, Andrew D.; Müller-Nurasyid, Martina; Nikus, Kjell; Peden, John F.; Rayner, N. William; Rasheed, Asif; Rosinger, Silke; Rubin, Diana; Rumpf, Moritz P.; Schäfer, Arne; Sivananthan, Mohan; Song, Ci; Stewart, Alexandre F. R.; Tan, Sian-Tsung; Thorgeirsson, Gudmundur; van der Schoot, C. Ellen; Wagner, Peter J.; Wells, George A.; Wild, Philipp S.; Yang, Tsun-Po; Amouyel, Philippe; Arveiler, Dominique; Basart, Hanneke; Boehnke, Michael; Boerwinkle, Eric; Brambilla, Paolo; Cambien, Francois; Cupples, Adrienne L.; de Faire, Ulf; Dehghan, Abbas; Diemert, Patrick; Epstein, Stephen E.; Evans, Alun; Ferrario, Marco M.; Ferrières, Jean; Gauguier, Dominique; Go, Alan S.; Goodall, Alison H.; Gudnason, Villi; Hazen, Stanley L.; Holm, Hilma; Iribarren, Carlos; Jang, Yangsoo; Kähönen, Mika; Kee, Frank; Kim, Hyo-Soo; Klopp, Norman; Koenig, Wolfgang; Kratzer, Wolfgang; Kuulasmaa, Kari; Laakso, Markku; Laaksonen, Reijo; Lee, Ji-Young; Lind, Lars; Ouwehand, Willem H.; Parish, Sarah; Park, Jeong E.; Pedersen, Nancy L.; Peters, Annette; Quertermous, Thomas; Rader, Daniel J.; Salomaa, Veikko; Schadt, Eric; Shah, Svati H.; Sinisalo, Juha; Stark, Klaus; Stefansson, Kari; Trégouët, David-Alexandre; Virtamo, Jarmo; Wallentin, Lars; Wareham, Nicholas; Zimmermann, Martina E.; Nieminen, Markku S.; Hengstenberg, Christian; Sandhu, Manjinder S.; Pastinen, Tomi; Syvänen, Ann-Christine; Hovingh, G. Kees; Dedoussis, George; Franks, Paul W.; Lehtimäki, Terho; Metspalu, Andres; Zalloua, Pierre A.; Siegbahn, Agneta; Schreiber, Stefan; Ripatti, Samuli; Blankenberg, Stefan S.; Perola, Markus; Clarke, Robert; Boehm, Bernhard O.; O'Donnell, Christopher; Reilly, Muredach P.; März, Winfried; Collins, Rory; Kathiresan, Sekar; Hamsten, Anders; Kooner, Jaspal S.; Thorsteinsdottir, Unnur; Danesh, John; Palmer, Colin N. A.; Roberts, Robert; Watkins, Hugh; Schunkert, Heribert; Samani, Nilesh J.

    2013-01-01

    Coronary artery disease (CAD) is the commonest cause of death. Here, we report an association analysis in 63,746 CAD cases and 130,681 controls identifying 15 loci reaching genome-wide significance, taking the number of susceptibility loci for CAD to 46, and a further 104 independent variants (r(2)

  15. Test and Analysis of a Buckling-Critical Large-Scale Sandwich Composite Cylinder

    Science.gov (United States)

    Schultz, Marc R.; Sleight, David W.; Gardner, Nathaniel W.; Rudd, Michelle T.; Hilburger, Mark W.; Palm, Tod E.; Oldfield, Nathan J.

    2018-01-01

    Structural stability is an important design consideration for launch-vehicle shell structures and it is well known that the buckling response of such shell structures can be very sensitive to small geometric imperfections. As part of an effort to develop new buckling design guidelines for sandwich composite cylindrical shells, an 8-ft-diameter honeycomb-core sandwich composite cylinder was tested under pure axial compression to failure. The results from this test are compared with finite-element-analysis predictions and overall agreement was very good. In particular, the predicted buckling load was within 1% of the test and the character of the response matched well. However, it was found that the agreement could be improved by including composite material nonlinearity in the analysis, and that the predicted buckling initiation site was sensitive to the addition of small bending loads to the primary axial load in analyses.
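
    For orientation, the classical thin-wall estimate of the critical axial buckling stress of a monocoque cylinder is sigma_cr = E·t / (R·sqrt(3·(1 − nu²))), usually degraded by an empirical knockdown factor for imperfection sensitivity. The Python sketch below evaluates that textbook formula with invented material numbers; the sandwich composite shell in the record requires the dedicated finite-element model described there.

        import math

        def sigma_critical(E, t, R, nu=0.3, knockdown=0.65):
            """Classical critical axial buckling stress with an empirical knockdown."""
            return knockdown * E * t / (R * math.sqrt(3.0 * (1.0 - nu**2)))

        # Illustrative inputs: E = 60 GPa, 5 mm equivalent wall, R = 1.22 m (8 ft diameter).
        print(f"{sigma_critical(60e9, 0.005, 1.22) / 1e6:.1f} MPa")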

  16. Geological analysis of paleozoic large-scale faulting in the south-central Pyrenees

    OpenAIRE

    Speksnijder, A.

    1986-01-01

    Detailed structural and sedimentological analysis reveals the existence of an east-west directed fundamental fault zone in the south-central Pyrenees, which has been intermittently active from (at least) the Devonian on. Emphasis is laid on the study of fault-bounded post-Variscan (StephanoPermian) sedimentary basins, and the influence of Late Paleozoic faulting on the underlying Variscan basement. The present structure of the basement is rather complex as it results from multiple Variscan an...

  17. LDRD final report : robust analysis of large-scale combinatorial applications.

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Robert D.; Morrison, Todd (University of Colorado, Denver, CO); Hart, William Eugene; Benavides, Nicolas L. (Santa Clara University, Santa Clara, CA); Greenberg, Harvey J. (University of Colorado, Denver, CO); Watson, Jean-Paul; Phillips, Cynthia Ann

    2007-09-01

    Discrete models of large, complex systems like national infrastructures and complex logistics frameworks naturally incorporate many modeling uncertainties. Consequently, there is a clear need for optimization techniques that can robustly account for risks associated with modeling uncertainties. This report summarizes the progress of the Late-Start LDRD 'Robust Analysis of Largescale Combinatorial Applications'. This project developed new heuristics for solving robust optimization models, and developed new robust optimization models for describing uncertainty scenarios.

  18. Large-scale linkage analysis of 1302 affected relative pairs with rheumatoid arthritis

    Science.gov (United States)

    Hamshere, Marian L; Segurado, Ricardo; Moskvina, Valentina; Nikolov, Ivan; Glaser, Beate; Holmans, Peter A

    2007-01-01

    Rheumatoid arthritis is the most common systemic autoimmune disease, and its etiology is believed to have both strong genetic and environmental components. We demonstrate the utility of including genetic and clinical phenotypes as covariates within a linkage analysis framework to search for rheumatoid arthritis susceptibility loci. The raw genotypes of 1302 affected relative pairs were combined from four large family-based samples (North American Rheumatoid Arthritis Consortium, United Kingdom, European Consortium on Rheumatoid Arthritis Families, and Canada). The familiality of the clinical phenotypes was assessed. The affected relative pairs were subjected to autosomal multipoint affected relative-pair linkage analysis. Covariates were included in the linkage analysis to take account of heterogeneity within the sample. Evidence of familiality was observed with age at onset (p << 0.001) and rheumatoid factor (RF) IgM (p << 0.001), but not definite erosions (p = 0.21). Genome-wide significant evidence for linkage was observed on chromosome 6. Genome-wide suggestive evidence for linkage was observed on chromosomes 13 and 20 when conditioning on age at onset, chromosome 15 conditional on gender, and chromosome 19 conditional on RF IgM after allowing for multiple testing of covariates. PMID:18466440

  19. Large-scale kinetic energy spectra from Eulerian analysis of EOLE wind data

    Science.gov (United States)

    Desbois, M.

    1975-01-01

    A data set of 56,000 winds determined from the horizontal displacements of EOLE balloons at the 200 mb level in the Southern Hemisphere during the period October 1971-February 1972 is utilized for the computation of planetary- and synoptic-scale kinetic energy spectra. However, the random distribution of measurements in space and time presents some problems for the spectral analysis. Two different approaches are used, i.e., a harmonic analysis of daily wind values at equidistant points obtained by space-time interpolation of the data, and a correlation method using the direct measurements. Both methods give similar results for small wavenumbers, but the second is more accurate for higher wavenumbers (k ≥ 10). The spectra show a maximum at wavenumbers 5 and 6 due to baroclinic instability and then decrease for high wavenumbers up to wavenumber 35 (the limit of the analysis), according to the inverse power law k⁻ᵖ, with p close to 3.
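
    The reported spectral decay can be summarized by fitting the exponent p in E(k) ∝ k⁻ᵖ with a log-log least-squares fit over the wavenumber range 10-35; the Python sketch below does this on a synthetic spectrum, since the EOLE data themselves are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)
        k = np.arange(10, 36)                      # wavenumber range quoted in the record
        energy = 5e3 * k**-3.0 * np.exp(0.05 * rng.standard_normal(k.size))  # synthetic E(k)

        slope, _ = np.polyfit(np.log(k), np.log(energy), 1)
        print(f"fitted p = {-slope:.2f}")          # close to 3, as in the record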

  20. Non-Destructive Thermography Analysis of Impact Damage on Large-Scale CFRP Automotive Parts

    Science.gov (United States)

    Maier, Alexander; Schmidt, Roland; Oswald-Tranta, Beate; Schledjewski, Ralf

    2014-01-01

    Laminated composites are increasingly used in aeronautics and the wind energy industry, as well as in the automotive industry. In these applications, the construction and processing need to fulfill the highest requirements regarding weight and mechanical properties. Environmental issues, like fuel consumption and CO2-footprint, set new challenges in producing lightweight parts that meet the highly monitored standards for these branches. In the automotive industry, one main aspect of construction is the impact behavior of structural parts. To verify the quality of parts made from composite materials with little effort, cost and time, non-destructive test methods are increasingly used. A highly recommended non-destructive testing method is thermography analysis. In this work, a prototype for a car’s base plate was produced by using vacuum infusion. For research work, testing specimens were produced with the same multi-layer build up as the prototypes. These specimens were charged with defined loads in impact tests to simulate the effect of stone chips. Afterwards, the impacted specimens were investigated with thermography analysis. The research results in that work will help to understand the possible fields of application and the usage of thermography analysis as the first quick and economic failure detection method for automotive parts. PMID:28788464

  1. Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model

    International Nuclear Information System (INIS)

    Dimov, I.; Georgieva, R.; Ostromsky, Tz.

    2012-01-01

    Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations to variations in the rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating remote transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points, MCA-MSS, has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's ΛΠτ sequences, an adaptive Monte Carlo algorithm, the plain Monte Carlo algorithm, as well as the eFAST and Sobol sensitivity approaches, both implemented in the SIMLAB software. The analysis and numerical results show advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties have been provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by variability of three chemical reactions rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
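
    To make the variance-based idea concrete, here is a plain Monte Carlo estimator of Sobol's first-order indices (the Saltelli-style pick-freeze scheme) for a toy three-parameter function standing in for the reaction-rate inputs; it illustrates the general approach, not the UNI-DEM computation or the MCA-MSS algorithm itself.

        import numpy as np

        rng = np.random.default_rng(42)
        n, d = 100_000, 3

        def model(x):                              # toy stand-in for the model output
            return x[:, 0] + 2.0 * x[:, 1]**2 + 0.5 * x[:, 2]

        A, B = rng.random((n, d)), rng.random((n, d))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))     # total output variance

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                    # vary only factor i between samples
            s_i = np.mean(fB * (model(ABi) - fA)) / var
            print(f"S_{i + 1} = {s_i:.3f}")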

  2. Non-Destructive Thermography Analysis of Impact Damage on Large-Scale CFRP Automotive Parts

    Directory of Open Access Journals (Sweden)

    Alexander Maier

    2014-01-01

    Full Text Available Laminated composites are increasingly used in aeronautics and the wind energy industry, as well as in the automotive industry. In these applications, the construction and processing need to fulfill the highest requirements regarding weight and mechanical properties. Environmental issues, like fuel consumption and CO2-footprint, set new challenges in producing lightweight parts that meet the highly monitored standards for these branches. In the automotive industry, one main aspect of construction is the impact behavior of structural parts. To verify the quality of parts made from composite materials with little effort, cost and time, non-destructive test methods are increasingly used. A highly recommended non-destructive testing method is thermography analysis. In this work, a prototype for a car’s base plate was produced by using vacuum infusion. For research work, testing specimens were produced with the same multi-layer build-up as the prototypes. These specimens were charged with defined loads in impact tests to simulate the effect of stone chips. Afterwards, the impacted specimens were investigated with thermography analysis. The research results of this work will help to understand the possible fields of application and the usage of thermography analysis as the first quick and economic failure detection method for automotive parts.

  3. Non-Destructive Thermography Analysis of Impact Damage on Large-Scale CFRP Automotive Parts.

    Science.gov (United States)

    Maier, Alexander; Schmidt, Roland; Oswald-Tranta, Beate; Schledjewski, Ralf

    2014-01-14

    Laminated composites are increasingly used in aeronautics and the wind energy industry, as well as in the automotive industry. In these applications, the construction and processing need to fulfill the highest requirements regarding weight and mechanical properties. Environmental issues, like fuel consumption and CO₂-footprint, set new challenges in producing lightweight parts that meet the highly monitored standards for these branches. In the automotive industry, one main aspect of construction is the impact behavior of structural parts. To verify the quality of parts made from composite materials with little effort, cost and time, non-destructive test methods are increasingly used. A highly recommended non-destructive testing method is thermography analysis. In this work, a prototype for a car's base plate was produced by using vacuum infusion. For research work, testing specimens were produced with the same multi-layer build-up as the prototypes. These specimens were charged with defined loads in impact tests to simulate the effect of stone chips. Afterwards, the impacted specimens were investigated with thermography analysis. The research results of this work will help to understand the possible fields of application and the usage of thermography analysis as the first quick and economic failure detection method for automotive parts.

  4. Proteinortho: detection of (co-)orthologs in large-scale analysis.

    Science.gov (United States)

    Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-04-28

    Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
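
    The reciprocal best alignment heuristic reduces, in its plainest form, to keeping pairs of proteins that are each other's best hit across two genomes. A minimal sketch of that reduced form follows (Proteinortho's extended version differs); the genomes, gene names, and precomputed hit tables are invented.

```python
# Minimal reciprocal-best-hit (RBH) sketch. `hits[(genome_a, genome_b)]` maps
# each query protein to its best-scoring subject protein in the other genome
# (assumed precomputed, e.g. by an alignment tool such as BLAST).
def reciprocal_best_hits(hits, genome_a, genome_b):
    ab = hits[(genome_a, genome_b)]   # best hit of each A-protein in B
    ba = hits[(genome_b, genome_a)]   # best hit of each B-protein in A
    return {(a, b) for a, b in ab.items() if ba.get(b) == a}

hits = {
    ("E.coli", "B.subtilis"): {"dnaA_ec": "dnaA_bs", "ftsZ_ec": "ftsZ_bs"},
    ("B.subtilis", "E.coli"): {"dnaA_bs": "dnaA_ec", "ftsZ_bs": "recA_ec"},
}
print(reciprocal_best_hits(hits, "E.coli", "B.subtilis"))  # {('dnaA_ec', 'dnaA_bs')}
```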

  5. Proteinortho: Detection of (co-)orthologs in large-scale analysis

    Directory of Open Access Journals (Sweden)

    Steiner Lydia

    2011-04-01

    Full Text Available Abstract Background: Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results: The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions: Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.

  6. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations.

  7. Automated selected reaction monitoring data analysis workflow for large-scale targeted proteomic studies.

    Science.gov (United States)

    Surinova, Silvia; Hüttenhain, Ruth; Chang, Ching-Yun; Espona, Lucia; Vitek, Olga; Aebersold, Ruedi

    2013-08-01

    Targeted proteomics based on selected reaction monitoring (SRM) mass spectrometry is commonly used for accurate and reproducible quantification of protein analytes in complex biological mixtures. Strictly hypothesis-driven, SRM assays quantify each targeted protein by collecting measurements on its peptide fragment ions, called transitions. To achieve sensitive and accurate quantitative results, experimental design and data analysis must consistently account for the variability of the quantified transitions. This consistency is especially important in large experiments, which increasingly require profiling up to hundreds of proteins over hundreds of samples. Here we describe a robust and automated workflow for the analysis of large quantitative SRM data sets that integrates data processing, statistical protein identification and quantification, and dissemination of the results. The integrated workflow combines three software tools: mProphet for peptide identification via probabilistic scoring; SRMstats for protein significance analysis with linear mixed-effect models; and PASSEL, a public repository for storage, retrieval and query of SRM data. The input requirements for the protocol are files with SRM traces in mzXML format, and a file with a list of transitions in a text tab-separated format. The protocol is especially suited for data with heavy isotope-labeled peptide internal standards. We demonstrate the protocol on a clinical data set in which the abundances of 35 biomarker candidates were profiled in 83 blood plasma samples of subjects with ovarian cancer or benign ovarian tumors. The time frame to realize the protocol is 1-2 weeks, depending on the number of replicates used in the experiment.

  8. Large-scale analysis of Arabidopsis transcription reveals a basal co-regulation network

    Directory of Open Access Journals (Sweden)

    Chamovitz Daniel A

    2009-09-01

    Full Text Available Abstract Background: Analyses of gene expression data from microarray experiments have become a central tool for identifying co-regulated, functional gene modules. A crucial aspect of such analysis is the integration of data from different experiments and different laboratories. How to weigh the contribution of different experiments is an important point influencing the final outcomes. We have developed a novel method for this integration, and applied it to genome-wide data from multiple Arabidopsis microarray experiments performed under a variety of experimental conditions. The goal of this study is to identify functional globally co-regulated gene modules in the Arabidopsis genome. Results: Following the analysis of 21,000 Arabidopsis genes in 43 datasets and about 2 × 10^8 gene pairs, we identified a globally co-expressed gene network. We found clusters of globally co-expressed Arabidopsis genes that are enriched for known Gene Ontology annotations. Two types of modules were identified in the regulatory network that differed in their sensitivity to the node-scoring parameter; we further showed these two pertain to general and specialized modules. Some of these modules were further investigated using the Genevestigator compendium of microarray experiments. Analyses of smaller subsets of data lead to the identification of condition-specific modules. Conclusion: Our method for identification of gene clusters allows the integration of diverse microarray experiments from many sources. The analysis reveals that part of the Arabidopsis transcriptome is globally co-expressed, and can be further divided into known as well as novel functional gene modules. Our methodology is general enough to apply to any set of microarray experiments, using any scoring function.
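
    A toy version of the pairwise co-expression step described above, assuming a small synthetic expression matrix: genes whose Pearson correlation across arrays exceeds a threshold are linked. The threshold and data are illustrative only, not the study's scoring function.

```python
import numpy as np

# Toy expression matrix: rows = genes, columns = arrays/conditions.
rng = np.random.default_rng(1)
expr = rng.standard_normal((6, 40))
expr[1] = expr[0] + 0.1 * rng.standard_normal(40)   # seed a co-expressed pair

corr = np.corrcoef(expr)                 # gene-by-gene Pearson correlation
threshold = 0.8
edges = [(i, j) for i in range(corr.shape[0])
         for j in range(i + 1, corr.shape[0]) if corr[i, j] > threshold]
print(edges)   # likely [(0, 1)], the seeded co-expressed pair
```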

  9. SELECTIVE MODAL ANALYSIS OF POWER FLOW OSCILLATION IN LARGE SCALE LONGITUDINAL POWER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Wirindi -

    2009-06-01

    Full Text Available Novel selective modal analysis for the determination of low-frequency power flow oscillation behaviour, based on eigenvalues with their corresponding damping ratios, cumulative damping index, and participation factors, is proposed. The power system being investigated consists of three large longitudinally interconnected areas with some weak tie lines. Different modes, such as exciter modes, inter-area modes, and local modes of the dominant poles, are fully studied to establish the level of system damping and the other factors producing power flow instability. The nature of the energy exchange between areas is determined and strategic power flow stability improvement is developed and tested.
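
    For context on the modal quantities named above: each complex eigenvalue lambda = sigma + j*omega of the linearized system yields an oscillation frequency omega/(2*pi) and a damping ratio zeta = -sigma/|lambda|. A small sketch with made-up eigenvalues:

```python
import numpy as np

# Eigenvalues of a linearized power-system state matrix (illustrative values).
eigs = np.array([-0.35 + 3.8j, -0.05 + 2.1j, -1.2 + 9.0j])

freq_hz = np.abs(eigs.imag) / (2 * np.pi)      # oscillation frequency of each mode
zeta = -eigs.real / np.abs(eigs)               # damping ratio of each mode
for lam, f, z in zip(eigs, freq_hz, zeta):
    flag = "poorly damped" if z < 0.05 else "ok"
    print(f"mode {lam:.2f}: f = {f:.2f} Hz, zeta = {z:.3f} ({flag})")
```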

  10. Twelve type 2 diabetes susceptibility loci identified through large-scale association analysis

    OpenAIRE

    Voight, Benjamin; Scott, Laura; Steinthorsdottir, Valgerdur; Morris, Andrew; Dina, Christian; Welch, Ryan; Zeggini, Eleftheria; Huth, Cornelia; Aulchenko, Yurii; Thorleifsson, Gudmar; McCulloch, Laura; Ferreira, Teresa; Grallert, Harald; Amin, Najaf; Wu, Guanming

    2010-01-01

    By combining genome-wide association data from 8,130 individuals with type 2 diabetes (T2D) and 38,987 controls of European descent and following up previously unidentified meta-analysis signals in a further 34,412 cases and 59,925 controls, we identified 12 new T2D association signals with combined P < 5 × 10^-8. These include a second independent signal at the KCNQ1 locus; the first report, to our knowledge, of an X-chromosomal association (near DUSP9); and a further instance of ov...

  11. A large-dimensional factor analysis of the Federal Reserve's large-scale asset purchases

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper assesses the economy-wide effects of US unconventional monetary policy shocks. A precise identification of the unconventional monetary policy shocks is achieved by imposing zero and sign restrictions on a number of impulse responses from a large-dimensional dynamic factor model. In particular, an unconventional expansionary monetary policy shock is identified as a shock that increases the Federal Reserve's market share of US treasuries and mortgage-backed securities, and leads to an improvement in the real economy and improved credit conditions. I find that an unconventional monetary ... securities by the Federal Reserve Bank avoided a severe downturn according to estimates from a counterfactual analysis.

  12. Large scale aggregate microarray analysis reveals three distinct molecular subclasses of human preeclampsia.

    Science.gov (United States)

    Leavey, Katherine; Bainbridge, Shannon A; Cox, Brian J

    2015-01-01

    Preeclampsia (PE) is a life-threatening hypertensive pathology of pregnancy affecting 3-5% of all pregnancies. To date, PE has no cure, early detection markers, or effective treatments short of the removal of what is thought to be the causative organ, the placenta, which may necessitate a preterm delivery. Additionally, numerous small placental microarray studies attempting to identify "PE-specific" genes have yielded inconsistent results. We therefore hypothesize that preeclampsia is a multifactorial disease encompassing several pathology subclasses, and that large cohort placental gene expression analysis will reveal these groups. To address our hypothesis, we utilized known bioinformatic methods to aggregate 7 microarray data sets across multiple platforms in order to generate a large data set of 173 patient samples, including 77 with preeclampsia. Unsupervised clustering of these patient samples revealed three distinct molecular subclasses of PE. This included a "canonical" PE subclass demonstrating elevated expression of known PE markers and genes associated with poor oxygenation and increased secretion, as well as two other subclasses potentially representing a poor maternal response to pregnancy and an immunological presentation of preeclampsia. Our analysis sheds new light on the heterogeneity of PE patients, and offers up additional avenues for future investigation. Hopefully, our subclassification of preeclampsia based on molecular diversity will finally lead to the development of robust diagnostics and patient-based treatments for this disorder.

  13. Performance Analysis of a Wind Turbine Driven Swash Plate Pump for Large Scale Offshore Applications

    International Nuclear Information System (INIS)

    Buhagiar, D; Sant, T

    2014-01-01

    This paper deals with the performance modelling and analysis of offshore wind turbine-driven hydraulic pumps. The concept consists of an open loop hydraulic system with the rotor main shaft directly coupled to a swash plate pump to supply pressurised sea water. A mathematical model is derived to cater for the steady-state behaviour of the entire system. A simplified model for the pump is implemented together with different control scheme options for regulating the rotor shaft power. A new control scheme is investigated, based on the combined use of hydraulic pressure and pitch control. Using a steady-state analysis, the study shows how the adoption of alternative control schemes in the wind turbine-hydraulic pump system may result in higher energy yields than those from a conventional system with an electrical generator and standard pitch control for power regulation. This is in particular the case with the new control scheme investigated in this study that is based on the combined use of pressure and rotor blade pitch control.

  14. Exergy analysis of large-scale helium liquefiers: Evaluating design trade-offs

    Science.gov (United States)

    Thomas, Rijo Jacob; Ghosh, Parthasarathi; Chowdhury, Kanchan

    2014-01-01

    It is known that a larger heat exchanger area, more expanders with higher efficiency, and a more involved configuration with a multi-pressure compression system increase the plant efficiency of a helium liquefier. However, they involve higher capital investment and larger size. Using the simulation software Aspen Hysys v 7.0 and exergy analysis as the tool of analysis, the authors have attempted to identify various trade-offs while selecting the number of stages, the pressure levels in the compressor, the cold-end configuration, the heat exchanger surface area, the maximum allowable pressure drop in heat exchangers, the efficiency of expanders, the parallel/series connection of expanders, etc. Use of more efficient cold ends reduces the number of refrigeration stages and the size of the plant. For achieving reliability along with performance, a configuration with a combination of expander and Joule-Thomson valve is found to be a better choice for the cold end. Use of a multi-pressure system is relevant only when the number of refrigeration stages is more than 5. Arrangement of expanders in series reduces the number of expanders as well as the heat exchanger size at a slight expense of plant efficiency. A superior heat exchanger (having less pressure drop per unit heat transfer area) results in only a 5% increase of plant performance even when it has 100% higher heat exchanger surface area.

  15. Large-scale analysis by SAGE reveals new mechanisms of v-erbA oncogene action

    Directory of Open Access Journals (Sweden)

    Faure Claudine

    2007-10-01

    Full Text Available Abstract Background: The v-erbA oncogene, carried by the Avian Erythroblastosis Virus, derives from the c-erbAα proto-oncogene that encodes the nuclear receptor for triiodothyronine (T3R). v-ErbA transforms erythroid progenitors in vitro by blocking their differentiation, supposedly by interference with T3R and RAR (Retinoic Acid Receptor). However, v-ErbA target genes involved in its transforming activity still remain to be identified. Results: By using Serial Analysis of Gene Expression (SAGE), we identified 110 genes deregulated by v-ErbA and potentially implicated in the transformation process. Bioinformatic analysis of promoter sequence and transcriptional assays point out a potential role of c-Myb in the v-ErbA effect. Furthermore, grouping of newly identified target genes by function revealed both expected (chromatin/transcription) and unexpected (protein metabolism) functions potentially deregulated by v-ErbA. We then focused our study on 15 of the new v-ErbA target genes and demonstrated by real-time PCR that, for the majority, their expression was activated neither by T3, nor RA, nor during differentiation. This was unexpected based upon the previously known role of v-ErbA. Conclusion: This paper suggests the involvement of a wealth of new unanticipated mechanisms of v-ErbA action.

  16. An Unsupervised Anomalous Event Detection and Interactive Analysis Framework for Large-scale Satellite Data

    Science.gov (United States)

    LIU, Q.; Lv, Q.; Klucik, R.; Chen, C.; Gallaher, D. W.; Grant, G.; Shang, L.

    2016-12-01

    Due to the high volume and complexity of satellite data, computer-aided tools for fast quality assessments and scientific discovery are indispensable for scientists in the era of Big Data. In this work, we have developed a framework for automated anomalous event detection in massive satellite data. The framework consists of a clustering-based anomaly detection algorithm and a cloud-based tool for interactive analysis of detected anomalies. The algorithm is unsupervised and requires no prior knowledge of the data (e.g., expected normal pattern or known anomalies). As such, it works for diverse data sets, and performs well even in the presence of missing and noisy data. The cloud-based tool provides an intuitive mapping interface that allows users to interactively analyze anomalies using multiple features. As a whole, our framework can (1) identify outliers in a spatio-temporal context, (2) recognize and distinguish meaningful anomalous events from individual outliers, (3) rank those events based on "interestingness" (e.g., rareness or total number of outliers) defined by users, and (4) enable interactive query, exploration, and analysis of those anomalous events. In this presentation, we will demonstrate the effectiveness and efficiency of our framework in detecting data quality issues and unusual natural events using two satellite datasets. The techniques and tools developed in this project are applicable for a diverse set of satellite data and will be made publicly available for scientists in early 2017.
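
    A minimal sketch of the clustering-based idea, not the project's actual algorithm: cluster the observations, then flag members of very small clusters as candidate anomalous events. The synthetic points and the 1% rarity cut-off below are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for satellite retrievals: one dominant regime plus a
# small, spatially coherent group of anomalous points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(8.0, 0.3, (5, 2))])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels, counts = np.unique(km.labels_, return_counts=True)
rare = labels[counts < 0.01 * len(X)]            # clusters covering <1% of points
anomalies = np.where(np.isin(km.labels_, rare))[0]
print(f"{anomalies.size} points flagged in {rare.size} candidate event(s)")
```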

  17. Analysis of recorded earthquake response data at the Hualien large-scale seismic test site

    International Nuclear Information System (INIS)

    Hyun, C.H.; Tang, H.T.; Dermitzakis, S.; Esfandiari, S.

    1997-01-01

    A soil-structure interaction (SSI) experiment is being conducted in a seismically active region in Hualien, Taiwan. To obtain earthquake data for quantifying SSI effects and providing a basis to benchmark analysis methods, a 1/4-th scale cylindrical concrete containment model similar in shape to that of a nuclear power plant containment was constructed in the field, where both the containment model and its surrounding soil, surface and sub-surface, are extensively instrumented to record earthquake data. Between September 1993 and May 1995, eight earthquakes with Richter magnitudes ranging from 4.2 to 6.2 were recorded. The authors focus on studying and analyzing the recorded data to provide information on the response characteristics of the Hualien soil-structure system, the SSI effects and the ground motion characteristics. An effort was also made to directly determine the site soil physical properties based on correlation analysis of the recorded data. No modeling simulations were attempted to analytically predict the SSI response of the soil and the structure. These will be the scope of a subsequent study.

  18. Application of the actor model to large scale NDE data analysis

    Science.gov (United States)

    Coughlin, Chris

    2018-03-01

    The Actor model of concurrent computation discretizes a problem into a series of independent units or actors that interact only through the exchange of messages. Without direct coupling between individual components, an Actor-based system is inherently concurrent and fault-tolerant. These traits lend themselves to so-called "Big Data" applications in which the volume of data to analyze requires a distributed multi-system design. For a practical demonstration of the Actor computational model, a system was developed to assist with the automated analysis of Nondestructive Evaluation (NDE) datasets using the open source Myriad Data Reduction Framework. A machine learning model trained to detect damage in two-dimensional slices of C-Scan data was deployed in a streaming data processing pipeline. To demonstrate the flexibility of the Actor model, the pipeline was deployed on a local system and re-deployed as a distributed system without recompiling, reconfiguring, or restarting the running application.
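
    To make the message-passing idea concrete, here is a minimal actor sketch in Python, with threads and queues standing in for a real framework such as the Myriad-based pipeline; the class and message names are invented.

```python
import threading, queue

class Actor(threading.Thread):
    """Minimal actor: private state, a mailbox, and message-only interaction."""
    def __init__(self):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()

    def send(self, msg):
        self.mailbox.put(msg)

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill stops the actor
                break
            self.receive(msg)

class SliceAnalyzer(Actor):
    """Stand-in for a worker that classifies one 2-D slice of C-scan data."""
    def receive(self, msg):
        print(f"analyzing slice {msg} for damage")

a = SliceAnalyzer(); a.start()
for i in range(3):
    a.send(i)
a.send(None); a.join()
```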

  19. Large-scale distribution patterns of mangrove nematodes: A global meta-analysis.

    Science.gov (United States)

    Brustolin, Marco C; Nagelkerken, Ivan; Fonseca, Gustavo

    2018-05-01

    Mangroves harbor diverse invertebrate communities, suggesting that macroecological distribution patterns of habitat-forming foundation species drive the associated faunal distribution. Whether these are driven by mangrove biogeography is still ambiguous. For small-bodied taxa, local factors and landscape metrics might be as important as macroecology. We performed a meta-analysis to address the following questions: (1) can richness of mangrove trees explain macroecological patterns of nematode richness? and (2) do local landscape attributes have equal or higher importance than biogeography in structuring nematode richness? The study covers mangrove areas of the Caribbean-Southwest Atlantic, Western Indian, Central Indo-Pacific, and Southwest Pacific biogeographic regions. We used random-effects meta-analyses based on the natural logarithm of the response ratio (lnRR) to assess the importance of macroecology (i.e., biogeographic regions, latitude, longitude), local factors (i.e., aboveground mangrove biomass and tree richness), and landscape metrics (forest area and shape) in structuring nematode richness from 34 mangrove sites around the world. Latitude, mangrove forest area, and forest shape index explained 19% of the heterogeneity across studies. Richness was higher at low latitudes, closer to the equator. At local scales, richness increased slightly with landscape complexity and decreased with forest shape index. Our results contrast with biogeographic diversity patterns of mangrove-associated taxa. Global-scale nematode diversity may have evolved independently of mangrove tree richness, and diversity of small-bodied metazoans is probably more closely driven by latitude and associated climates, rather than local, landscape, or global biogeographic patterns.
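
    The effect size used above, the natural log of the response ratio (lnRR), has a simple closed form together with a delta-method sampling variance. A sketch with invented richness numbers (not the study's data):

```python
import numpy as np

def ln_rr(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Log response ratio and its sampling variance (delta-method approximation)."""
    lnrr = np.log(mean_t / mean_c)
    var = sd_t**2 / (n_t * mean_t**2) + sd_c**2 / (n_c * mean_c**2)
    return lnrr, var

# e.g. nematode richness at low- vs high-latitude sites (made-up numbers)
effect, v = ln_rr(mean_t=24.0, sd_t=6.0, n_t=12, mean_c=15.0, sd_c=5.0, n_c=10)
print(f"lnRR = {effect:.3f}, SE = {v**0.5:.3f}")
```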

  20. Large-scale Metabolomic Analysis Reveals Potential Biomarkers for Early Stage Coronary Atherosclerosis.

    Science.gov (United States)

    Gao, Xueqin; Ke, Chaofu; Liu, Haixia; Liu, Wei; Li, Kang; Yu, Bo; Sun, Meng

    2017-09-18

    Coronary atherosclerosis (CAS) is the pathogenesis of coronary heart disease, a prevalent and chronic life-threatening disease. The disease is often not detected until a patient presents with serious vascular occlusion. Therefore, new biomarkers for appropriate and timely diagnosis of early CAS are needed for screening to initiate therapy on time. In this study, we used an untargeted metabolomics approach to identify potential biomarkers that could enable highly sensitive and specific CAS detection. Score plots from partial least-squares discriminant analysis clearly separated early-stage CAS patients from controls. Meanwhile, the levels of 24 metabolites increased greatly and those of 18 metabolites decreased markedly in early CAS patients compared with the controls, which suggested significant metabolic dysfunction in phospholipid, sphingolipid, and fatty acid metabolism in the patients. Furthermore, binary logistic regression showed that nine metabolites could be used as a combinatorial biomarker to distinguish early-stage CAS patients from controls. The panel of nine metabolites was then tested with an independent cohort of samples, which also yielded satisfactory diagnostic accuracy (AUC = 0.890). In conclusion, our findings provide insight into the pathological mechanism of early-stage CAS and also supply a combinatorial biomarker to aid clinical diagnosis of early-stage CAS.
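
    As an illustration of a combinatorial biomarker panel of the kind described, the sketch below fits a binary logistic regression on nine synthetic "metabolite" features and reports an in-sample AUC. The data are fabricated; the paper's AUC of 0.890 was obtained on an independent cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: 9 metabolite levels for 120 subjects (not the study data).
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 9))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(120) > 0).astype(int)

model = LogisticRegression().fit(X, y)          # 9-metabolite combinatorial panel
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.3f}")             # illustration only, optimistic
```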

  1. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, Fujisawa [Tokyo Univ., Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science (Japan); Genki, Yagawa [Tokyo Univ., Department of Quantum Engineering and Systems Science (Japan)

    2003-07-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  2. Large scale meta-analysis of fragment-based screening campaigns: privileged fragments and complementary technologies.

    Science.gov (United States)

    Kutchukian, Peter S; Wassermann, Anne Mai; Lindvall, Mika K; Wright, S Kirk; Ottl, Johannes; Jacob, Jaison; Scheufler, Clemens; Marzinzik, Andreas; Brooijmans, Natasja; Glick, Meir

    2015-06-01

    A first step in fragment-based drug discovery (FBDD) often entails a fragment-based screen (FBS) to identify fragment "hits." However, the integration of conflicting results from orthogonal screens remains a challenge. Here we present a meta-analysis of 35 fragment-based campaigns at Novartis, which employed a generic 1400-fragment library against diverse target families using various biophysical and biochemical techniques. By statistically interrogating the multidimensional FBS data, we sought to investigate three questions: (1) What makes a fragment amenable for FBS? (2) How do hits from different fragment screening technologies and target classes compare with each other? (3) What is the best way to pair FBS assay technologies? In doing so, we identified substructures that were privileged for specific target classes, as well as fragments that were privileged for authentic activity against many targets. We also revealed some of the discrepancies between technologies. Finally, we uncovered a simple rule of thumb in screening strategy: when choosing two technologies for a campaign, pairing a biochemical and biophysical screen tends to yield the greatest coverage of authentic hits. © 2014 Society for Laboratory Automation and Screening.

  3. Dictionaries and distributions: Combining expert knowledge and large scale textual data content analysis: Distributed dictionary representation.

    Science.gov (United States)

    Garten, Justin; Hoover, Joe; Johnson, Kate M; Boghrati, Reihane; Iskiwitch, Carol; Dehghani, Morteza

    2018-02-01

    Theory-driven text analysis has made extensive use of psychological concept dictionaries, leading to a wide range of important results. These dictionaries have generally been applied through word count methods which have proven to be both simple and effective. In this paper, we introduce Distributed Dictionary Representations (DDR), a method that applies psychological dictionaries using semantic similarity rather than word counts. This allows for the measurement of the similarity between dictionaries and spans of text ranging from complete documents to individual words. We show how DDR enables dictionary authors to place greater emphasis on construct validity without sacrificing linguistic coverage. We further demonstrate the benefits of DDR on two real-world tasks and finally conduct an extensive study of the interaction between dictionary size and task performance. These studies allow us to examine how DDR and word count methods complement one another as tools for applying concept dictionaries and where each is best applied. Finally, we provide references to tools and resources to make this method both available and accessible to a broad psychological audience.
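
    A sketch of the core DDR computation, assuming pretrained word vectors are available: score a span of text by the cosine similarity between the mean vector of a concept dictionary and the mean vector of the text. The tiny vectors below are fabricated so the example is self-contained.

```python
import numpy as np

def ddr_score(doc_tokens, dictionary, vectors):
    """DDR-style score: cosine similarity between the mean vector of a
    dictionary and the mean vector of a document (both sketched here)."""
    def centroid(words):
        vs = [vectors[w] for w in words if w in vectors]
        return np.mean(vs, axis=0)
    d, t = centroid(dictionary), centroid(doc_tokens)
    return float(d @ t / (np.linalg.norm(d) * np.linalg.norm(t)))

# `vectors` would normally come from pretrained embeddings (e.g. word2vec);
# tiny made-up vectors are used here so the sketch runs stand-alone.
vectors = {"kind": np.array([1.0, 0.2]), "cruel": np.array([-1.0, 0.1]),
           "help": np.array([0.9, 0.3]), "hurt": np.array([-0.8, 0.2])}
print(ddr_score(["help"], ["kind"], vectors))   # high similarity
print(ddr_score(["hurt"], ["kind"], vectors))   # low similarity
```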

  4. Review of software tools for design and analysis of large scale MRM proteomic datasets.

    Science.gov (United States)

    Colangelo, Christopher M; Chung, Lisa; Bruce, Can; Cheung, Kei-Hoi

    2013-06-15

    Selective or Multiple Reaction monitoring (SRM/MRM) is a liquid-chromatography (LC)/tandem-mass spectrometry (MS/MS) method that enables the quantitation of specific proteins in a sample by analyzing precursor ions and the fragment ions of their selected tryptic peptides. Instrumentation software has advanced to the point that thousands of transitions (pairs of primary and secondary m/z values) can be measured in a triple quadrupole instrument coupled to an LC, by a well-designed scheduling and selection of m/z windows. The design of a good MRM assay relies on the availability of peptide spectra from previous discovery-phase LC-MS/MS studies. The tedious aspect of manually developing and processing MRM assays involving thousands of transitions has spurred the development of software tools to automate this process. Software packages have been developed for project management, assay development, assay validation, data export, peak integration, quality assessment, and biostatistical analysis. No single tool provides a complete end-to-end solution, thus this article reviews the current state and discusses future directions of these software tools in order to enable researchers to combine these tools for a comprehensive targeted proteomics workflow. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Large-scale analysis and forecast experiments with wind data from the Seasat A scatterometer

    Science.gov (United States)

    Baker, W. E.; Atlas, R.; Kalnay, E.; Halem, M.; Woiceshyn, P. M.; Peteherych, S.; Edelmann, D.

    1984-01-01

    A series of data assimilation experiments is performed to assess the impact of Seasat A satellite scatterometer (SASS) wind data on Goddard Laboratory for Atmospheric Sciences (GLAS) model forecasts. The SASS data are dealiased as part of an objective analysis system utilizing a three-pass procedure. The impact of the SASS data is evaluated with and without temperature soundings from the NOAA 4 Vertical Temperature Profile Radiometer (VTPR) instrument in order to study possible redundancy between surface wind data and upper air temperature data. In the northern hemisphere the SASS data are generally found to have a negligible effect on the forecasts. In the southern hemisphere the forecast impact from SASS data is somewhat larger and primarily beneficial in the absence of VTPR data. However, the inclusion of VTPR data effectively eliminates the positive impact over Australia and South America. This indicates that SASS data can be beneficial for numerical weather prediction in regions with large data gaps, but in the presence of satellite soundings the usefulness of SASS data is significantly reduced.

  6. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    International Nuclear Information System (INIS)

    Toshimitsu, Fujisawa; Genki, Yagawa

    2003-01-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  7. Application of the RELAP5 to the analysis of large scale integral experiments

    International Nuclear Information System (INIS)

    D'Auria, F.; Galassi, G. M.

    2000-01-01

    The present paper discusses the application of the code-nodalisation to the analysis of experiments performed in the UPTF and the PANDA facility, available in Germany and in Switzerland, respectively. The UPTF simulates all the internals of the reactor pressure vessel of a Pressurized Water Reactor at 1/1 scale of the geometric dimensions (diameters and lengths). The PANDA simulates the containment system of a Simplified Boiling Water Reactor and the vessel of the reactor. The considered experimental data base includes the occurrence of three-dimensional phenomena that are relevant to the refill phase of a large break Loss of Coolant Accident in a PWR and to the coupling between primary system and containment performance in a SBWR. The application of the code also required adapting and extending the methodology for nodalisation development. The results are satisfactory from the qualitative point of view as far as the performance of the code-nodalisation is concerned and do not show any important code limitation. They also confirm the suitability of the code for applications in nuclear technology. However, this conclusion should be considered preliminary because of the lack of independent proofs. (author)

  8. Environmental analysis of a potential district heating network powered by a large-scale cogeneration plant.

    Science.gov (United States)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara; Genon, Giuseppe

    2017-05-01

    Among the solutions for the achievement of environmental sustainability in the energy sector, district heating (DH) with combined heat and power (CHP) systems is increasingly being used. The Italian city of Turin is in a leading position in this field, having one of the largest DH networks in Europe. The aim of this work is the analysis of a further development of the network, addressed to reduce the presence of pollutants in a city that has long been subject to high concentration levels. The environmental compatibility of this intervention, especially in terms of nitrogen oxides (NOx) and particulate matter (PM) emissions, is evaluated. The pollutants dispersion is estimated using the CALPUFF model. The forecasting scenario is created firstly by simulating the energy production of the main generation plants in response to the estimated heat demand, and secondly by investigating the amount and the dispersion of pollutants removed due to the elimination of the centralized residential heaters. The results show a future reduction in ground-level average NOx concentration ranging between 0.2 and 4 μg/m³. The concentration of PM remains almost unchanged. Measures are then taken to lower the uncertainty in the simulation scenarios. This study provides important information on the effects of a change of the energy configuration on air quality in an urban area. The proposed methodological approach is comprehensive and repeatable.

  9. Multiple Skills Underlie Arithmetic Performance: A Large-Scale Structural Equation Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Sarit Ashkenazi

    2017-12-01

    Full Text Available Current theoretical approaches point to the importance of several cognitive skills not specific to mathematics for the etiology of mathematics disorders (MD). In the current study, we examined the role of many of these skills, specifically: rapid automatized naming, attention, reading, and visual perception, on mathematics performance among a large group of college students (N = 1,322) with a wide range of arithmetic proficiency. Using factor analysis, we discovered that our data clustered into four latent variables: (1) mathematics, (2) perception speed, (3) attention and (4) reading. In subsequent structural equation modeling, we found that the latent variable perception speed had a strong and meaningful effect on mathematics performance. Moreover, sustained attention, independent from the effect of the latent variable perception speed, had a meaningful, direct effect on arithmetic fact retrieval and procedural knowledge. The latent variable reading had a modest effect on mathematics performance. Specifically, reading comprehension, independent from the effect of the latent variable reading, had a meaningful direct effect on mathematics, and particularly on number line knowledge. Attention, tested by the attention network test, had no effect on mathematics, reading or perception speed. These results indicate that multiple factors can affect mathematics performance, supporting a heterogeneous approach to mathematics. These results have meaningful implications for the diagnosis and intervention of pure and comorbid learning disorders.

  10. Spatial extreme value analysis to project extremes of large-scale indicators for severe weather.

    Science.gov (United States)

    Gilleland, Eric; Brown, Barbara G; Ammann, Caspar M

    2013-09-01

    Concurrently high values of the maximum potential wind speed of updrafts (Wmax) and 0-6 km wind shear (Shear) have been found to represent conducive environments for severe weather, which subsequently provides a way to study severe weather in future climates. Here, we employ a model for the product of these variables (WmSh) from the National Center for Atmospheric Research/United States National Center for Environmental Prediction reanalysis over North America conditioned on their having extreme energy in the spatial field in order to project the predominant spatial patterns of WmSh. The approach is based on the Heffernan and Tawn conditional extreme value model. Results suggest that this technique estimates the spatial behavior of WmSh well, which allows for exploring possible changes in the patterns over time. While the model enables a method for inferring the uncertainty in the patterns, such analysis is difficult with the currently available inference approach. A variation of the method is also explored to investigate how this type of model might be used to qualitatively understand how the spatial patterns of WmSh correspond to extreme river flow events. A case study for river flows from three rivers in northwestern Tennessee is studied, and it is found that advection of WmSh from the Gulf of Mexico prevails while elsewhere, WmSh is generally very low during such extreme events. © 2013 The Authors. Environmetrics published by John Wiley & Sons, Ltd.

  11. Large-scale automated analysis of news media: a novel computational method for obesity policy research.

    Science.gov (United States)

    Hamad, Rita; Pomeranz, Jennifer L; Siddiqi, Arjumand; Basu, Sanjay

    2015-02-01

    Analyzing news media allows obesity policy researchers to understand popular conceptions about obesity, which is important for targeting health education and policies. A persistent dilemma is that investigators have to read and manually classify thousands of individual news articles to identify how obesity and obesity-related policy proposals may be described to the public in the media. A machine learning method called "automated content analysis" that permits researchers to train computers to "read" and classify massive volumes of documents was demonstrated. 14,302 newspaper articles that mentioned the word "obesity" during 2011-2012 were identified. Four states that vary in obesity prevalence and policy (Alabama, California, New Jersey, and North Carolina) were examined. The reliability of an automated program to categorize the media's framing of obesity as an individual-level problem (e.g., diet) and/or an environmental-level problem (e.g., obesogenic environment) was tested. The automated program performed similarly to human coders. The proportion of articles with individual-level framing (27.7-31.0%) was higher than the proportion with neutral (18.0-22.1%) or environmental-level framing (16.0-16.4%) across all states and over the entire study period. The utility of automated content analysis as a computational method for obesity policy research on news media was demonstrated. © 2014 The Obesity Society.
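
    A minimal sketch of supervised automated content analysis in this spirit: train a text classifier on a few hand-coded articles, then label unseen ones. The corpus, labels, and framing categories below are invented, and the study's actual coding scheme and model may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled corpus standing in for hand-coded training articles:
# 1 = individual-level framing, 0 = environmental-level framing.
texts = ["obesity stems from poor personal diet choices",
         "willpower and exercise habits drive weight gain",
         "food deserts create an obesogenic environment",
         "zoning policy limits access to healthy food"]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["city planning shapes the food environment"]))  # likely [0]
```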

  12. QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.

    Directory of Open Access Journals (Sweden)

    Mario Inostroza-Ponta

    Full Text Available BACKGROUND: The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities" and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. METHODOLOGY/PRINCIPAL FINDINGS: We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. CONCLUSIONS/SIGNIFICANCE: Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need of an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on
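
    For reference, the QAP objective behind QAPgrid can be sketched directly: choose a permutation assigning objects to grid positions so that the similarity-weighted sum of position distances is minimized, placing similar objects close together. The snippet below only evaluates that objective and does a crude random search; the paper's Memetic Algorithm is far more capable, and all matrices here are random stand-ins.

```python
import numpy as np

def qap_cost(perm, sim, dist):
    """QAP objective: total similarity-weighted distance when object i is
    placed at grid position perm[i]."""
    return sum(sim[i, j] * dist[perm[i], perm[j]]
               for i in range(len(perm)) for j in range(len(perm)) if i != j)

rng = np.random.default_rng(0)
n = 5
sim = rng.random((n, n)); sim = (sim + sim.T) / 2        # object similarities
dist = rng.random((n, n)); dist = (dist + dist.T) / 2    # grid-position distances

# A memetic algorithm searches permutations; here only random restarts, for brevity.
best = min((rng.permutation(n) for _ in range(1000)),
           key=lambda p: qap_cost(p, sim, dist))
print(best, qap_cost(best, sim, dist))
```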

  13. Uncertainty analysis of multiple canister repository model by large-scale calculation

    International Nuclear Information System (INIS)

    Tsujimoto, K.; Okuda, H.; Ahn, J.

    2007-01-01

    A prototype uncertainty analysis has been made by using the multiple-canister radionuclide transport code, VR, for performance assessment of the high-level radioactive waste repository. Fractures in the host rock determine the main conduits of groundwater, and thus significantly affect the magnitude of radionuclide release rates from the repository. In this study, the probability distribution function (PDF) for the number of connected canisters in the same fracture cluster that bears water flow has been determined in a Monte Carlo fashion by running the FFDF code with assumed PDFs for fracture geometry. The uncertainty in the release rate of 237Np from a hypothetical repository containing 100 canisters has been quantitatively evaluated by using the VR code with PDFs for the number of connected canisters and the near-field rock porosity. The calculation results show that the mass transport is greatly affected by (1) the magnitude of the radionuclide source determined by the number of canisters connected by the fracture cluster, and (2) the canister concentration effect in the same fracture network. The results also show two conflicting tendencies: the more fractures in the repository model space, the greater the average value but the smaller the uncertainty of the peak fractional release rate. To perform the vast amount of calculations, we have utilized the Earth Simulator and SR8000. The multi-level hybrid programming method is applied in the optimization to exploit the high performance of the Earth Simulator. Latin Hypercube Sampling has been utilized to reduce the number of samplings in the Monte Carlo calculation. (authors)
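
    Latin Hypercube Sampling, used above to cut down the number of Monte Carlo runs, stratifies each input dimension so that every stratum is sampled exactly once. A small self-contained sketch:

```python
import numpy as np

def latin_hypercube(n, d, rng=np.random.default_rng(0)):
    """n samples in [0,1]^d with exactly one sample per axis-aligned stratum,
    which is why LHS needs fewer samples than plain Monte Carlo."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # one point per stratum
    for k in range(d):
        rng.shuffle(u[:, k])                               # decouple the dimensions
    return u

samples = latin_hypercube(n=8, d=2)
print(samples)
```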

  14. Large scale association analysis identifies three susceptibility loci for coronary artery disease.

    Directory of Open Access Journals (Sweden)

    Stephanie Saade

    Full Text Available Genome-wide association studies (GWAS) and their replications that have associated DNA variants with myocardial infarction (MI) and/or coronary artery disease (CAD) are predominantly based on populations of European or Eastern Asian descent. Replication of the most significantly associated polymorphisms in multiple populations with distinctive genetic backgrounds and lifestyles is crucial to the understanding of the pathophysiology of a multifactorial disease like CAD. We have used our Lebanese cohort to perform a replication study of nine previously identified CAD/MI susceptibility loci (LTA, CDKN2A-CDKN2B, CELSR2-PSRC1-SORT1, CXCL12, MTHFD1L, WDR12, PCSK9, SH2B3, and SLC22A3), and 88 genes in related phenotypes. The study was conducted on 2,002 patients with detailed demographic, clinical characteristics, and cardiac catheterization results. One marker, rs6922269, in MTHFD1L was significantly protective against MI (OR=0.68, p=0.0035), while the variant rs4977574 in CDKN2A-CDKN2B was significantly associated with MI (OR=1.33, p=0.0086). Associations were detected after adjustment for family history of CAD, gender, hypertension, hyperlipidemia, diabetes, and smoking. The parallel study of 88 previously published genes in related phenotypes encompassed 20,225 markers, three quarters of which had imputed genotypes. The study was based on our genome-wide genotype data set, with imputation across the whole genome to HapMap II release 22 using the HapMap CEU population as a reference. Analysis was conducted on both the genotyped and imputed variants in the 88 regions covering the selected genes. This approach replicated the HNRNPA3P1-CXCL12 association with CAD and identified new significant associations of CDKAL1, ST6GAL1, and PTPRD with CAD. Our study provides evidence for the importance of the multifactorial aspect of CAD/MI and describes genes predisposing to their etiology.

  15. Disease gene characterization through large-scale co-expression analysis.

    Directory of Open Access Journals (Sweden)

    Allen Day

    2009-12-01

    Full Text Available In the post-genome era, a major goal of biology is the identification of specific roles for individual genes. We report a new genomic tool for gene characterization, the UCLA Gene Expression Tool (UGET). Celsius, the largest co-normalized microarray dataset of Affymetrix-based gene expression, was used to calculate the correlation between all possible gene pairs on all platforms, and to generate stored indexes in a web-searchable format. The size of Celsius makes UGET a powerful gene characterization tool. Using a small seed list of known cartilage-selective genes, UGET extended the list of known genes by identifying 32 new highly cartilage-selective genes. Of these, 7 of 10 tested were validated by qPCR, including the novel cartilage-specific genes SDK2 and FLJ41170. In addition, we retrospectively tested UGET and other gene expression based prioritization tools to identify disease-causing genes within known linkage intervals. We first demonstrated this utility with UGET using genetically heterogeneous disorders such as Joubert syndrome, microcephaly, neuropsychiatric disorders and type 2 limb girdle muscular dystrophy (LGMD2), and then compared UGET to other gene expression based prioritization programs which use small but discrete and well annotated datasets. Finally, we observed a significantly higher gene correlation shared between genes in disease networks associated with similar complex or Mendelian disorders. UGET is an invaluable resource for geneticists that permits the rapid inclusion of expression criteria from one to hundreds of genes in genomic intervals linked to disease. By using thousands of arrays UGET annotates and prioritizes genes better than other tools, especially with rare tissue disorders or complex multi-tissue biological processes. This information can be critical in the prioritization of candidate genes for sequence analysis.

  16. The role of reservoir storage in large-scale surface water availability analysis for Europe

    Science.gov (United States)

    Garrote, L. M.; Granados, A.; Martin-Carrasco, F.; Iglesias, A.

    2017-12-01

    A regional assessment of current and future water availability in Europe is presented in this study. The assessment was made using the Water Availability and Adaptation Policy Analysis (WAAPA) model. The model was built on the river network derived from the Hydro1K digital elevation maps, including all major river basins of Europe. Reservoir storage volume was taken from the World Register of Dams of ICOLD, including all dams with storage capacity over 5 hm3. Potential Water Availability is defined as the maximum amount of water that could be supplied at a certain point of the river network to satisfy a regular demand under pre-specified reliability requirements. Water availability is the combined result of hydrological processes, which determine streamflow in natural conditions, and human intervention, which determines the available hydraulic infrastructure to manage water and establishes water supply conditions through operating rules. The WAAPA algorithm estimates the maximum demand that can be supplied at every node of the river network accounting for the regulation capacity of reservoirs under different management scenarios. The model was run for a set of hydrologic scenarios taken from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP), where the PCRGLOBWB hydrological model was forced with results from five global climate models. Model results allow the estimation of potential water stress by comparing water availability to projections of water abstractions along the river network under different management alternatives. The set of sensitivity analyses performed showed the effect of policy alternatives on water availability and highlighted the large uncertainties linked to hydrological and anthropological processes.

  17. Detailed Modeling and Irreversible Transfer Process Analysis of a Multi-Element Thermoelectric Generator System

    Science.gov (United States)

    Xiao, Heng; Gou, Xiaolong; Yang, Suwen

    2011-05-01

    Thermoelectric (TE) power generation technology, due to its several advantages, is becoming a noteworthy research direction. Many researchers conduct their performance analysis and optimization of TE devices and related applications based on the generalized thermoelectric energy balance equations. These generalized TE equations involve the internal irreversibility of Joule heating inside the thermoelectric device and heat leakage through the thermoelectric couple leg. However, it is assumed that the thermoelectric generator (TEG) is thermally isolated from the surroundings except for the heat flows at the cold and hot junctions. Since the thermoelectric generator is in practice a multi-element device, composed of many fundamental TE couple legs, the effect of heat transfer between the TE couple leg and the ambient environment is not negligible. In this paper, based on basic theories of thermoelectric power generation and thermal science, detailed modeling of a thermoelectric generator taking account of the phenomenon of energy loss from the TE couple leg is reported. The revised generalized thermoelectric energy balance equations considering the effect of heat transfer between the TE couple leg and the ambient environment have been derived. Furthermore, characteristics of a multi-element thermoelectric generator with irreversibility have been investigated on the basis of the newly derived TE equations. In the present investigation, second-law-based thermodynamic analysis (exergy analysis) has been applied to the irreversible heat transfer process in particular. It is found that the existence of the irreversible heat convection process causes a large loss of heat exergy in the TEG system, and using thermoelectric generators for low-grade waste heat recovery has promising potential. The results of irreversibility analysis, especially irreversible effects on generator system performance, based on the system model established in detail have guiding significance for
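
    For orientation, the textbook single-couple energy-balance relations that the paper generalizes (before adding leg-to-ambient heat exchange) are Q_h = alpha*T_h*I + K*dT - I^2*R/2 and Q_c = alpha*T_c*I + K*dT + I^2*R/2, with electrical power P = Q_h - Q_c. A sketch with made-up parameter values:

```python
def teg_couple(alpha, K, R, T_h, T_c, I):
    """Textbook TE couple energy balances (no side-wall heat loss included)."""
    dT = T_h - T_c
    q_h = alpha * T_h * I + K * dT - 0.5 * I**2 * R   # heat absorbed at hot side
    q_c = alpha * T_c * I + K * dT + 0.5 * I**2 * R   # heat rejected at cold side
    return q_h, q_c, q_h - q_c                        # last term = electrical power

# Illustrative parameters only: Seebeck coefficient alpha [V/K], thermal
# conductance K [W/K], electrical resistance R [ohm], current I [A].
q_h, q_c, p = teg_couple(alpha=4e-4, K=0.02, R=0.01, T_h=500.0, T_c=300.0, I=5.0)
print(f"Q_h = {q_h:.3f} W, Q_c = {q_c:.3f} W, P = {p:.3f} W")
```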

  18. Exploring Large Scale Data Analysis and Visualization for ARM Data Discovery Using NoSQL Technologies

    Science.gov (United States)

    Krishna, B.; Gustafson, W. I., Jr.; Vogelmann, A. M.; Toto, T.; Devarakonda, R.; Palanisamy, G.

    2016-12-01

    This paper presents a new way of providing ARM data discovery through data analysis and visualization services. ARM stands for Atmospheric Radiation Measurement. The program was created to study cloud formation processes and their influence on radiative transfer, and also includes additional measurements of aerosol and precipitation at various highly instrumented ground and mobile stations. The total volume of ARM data is roughly 900 TB. The current search for ARM data is performed using its metadata, such as the site name, instrument name, date, etc. NoSQL technologies were explored to improve the capabilities of data searching, not only by metadata but also by measurement values. Two technologies currently being implemented for testing are Apache Cassandra (a NoSQL database) and Apache Spark (a NoSQL-based analytics framework). Both of these technologies were developed to work in a distributed environment and hence can handle large data volumes for storage and analytics. D3.js is a JavaScript library that can generate interactive data visualizations in web browsers by making use of the commonly used SVG, HTML5, and CSS standards. To test the performance of NoSQL for ARM data, we will be using ARM's popular measurements to locate data based on their values. Recently, NoSQL technology has been applied to a pilot project called LASSO, which stands for LES ARM Symbiotic Simulation and Observation Workflow. LASSO will be packaging LES output and observations in "data bundles", and analyses will require the ability for users to analyze both observations and LES model output, either individually or together, across multiple time periods. The LASSO implementation strategy suggests that enormous data storage is required to store the above-mentioned quantities. Thus NoSQL was used to provide a powerful means to store portions of the data, giving users search capabilities on each simulation's traits through a web application. Based on the user selection ...
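
    As a sketch of what value-based discovery could look like on such a stack, the PySpark fragment below filters a hypothetical measurement table by value rather than by metadata; the storage path, schema, column names and the 'lwp' variable are all invented for illustration and do not reflect the actual ARM or LASSO data layout.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("arm-value-search").getOrCreate()

# Hypothetical schema: one row per (site, instrument, time, measurement, value).
obs = spark.read.parquet("s3://arm-archive/measurements/")  # path is illustrative

# Value-based discovery: find sites and days where a measurement exceeded a
# threshold, rather than searching on metadata alone.
hits = (obs.filter((F.col("measurement") == "lwp") & (F.col("value") > 200.0))
           .groupBy("site", F.to_date("time").alias("day"))
           .count()
           .orderBy(F.desc("count")))
hits.show(10)
```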

  19. An analysis of employee exposure to organic dust at large-scale composting facilities

    Science.gov (United States)

    Sykes, P.; Allen, J. A.; Wildsmith, J. D.; Jones, K. P.

    2009-02-01

    The occupational health implications of exposure to dust, endotoxin and 1-3 β Glucan at commercial composting sites are uncertain. This study aims to establish employee exposure levels to inhalable and respirable dust, endotoxin and 1-3 β Glucan during various operational practices in the composting process. Personal samples were collected and the inhalable and respirable dust fractions were determined by gravimetric analysis. Endotoxin concentrations were determined using a Limulus Amebocyte Lysate (LAL) assay. 1-3 β Glucan levels were estimated using a specific blocking agent to establish the contribution of these compounds to the original endotoxin assay. Employees' exposure to dust was found to be generally lower than the levels stipulated in the Control of Substances Hazardous to Health Regulations (COSHH) 2002 (as amended) (median inhalable fraction 1.08 mg/m3, min 0.25 mg/m3, max 10.80 mg/m3; median respirable fraction 0.05 mg/m3, min 0.02 mg/m3, max 1.49 mg/m3). Determination of the biological component of the dust showed that employees' exposures to endotoxin were elevated (median 31.5 EU/m3, min 2.00 EU/m3, max 1741.78 EU/m3), particularly when waste was agitated (median 175.0 EU/m3, min 2.03 EU/m3, max 1741.78 EU/m3). Eight out of 32 (25%) of the personal exposure measurements for endotoxin exceeded the 200 EU/m3 temporary legal limit adopted in the Netherlands, and thirteen out of 32 (40.6%) exceeded the 50 EU/m3 guidance level suggested to protect workers from respiratory health effects. A significant correlation was observed between employee inhalable dust exposure and personal endotoxin concentration (r = 0.728, statistically significant). Health risks associated with endotoxin exposure at composting sites are uncertain: employee exposure levels and dose-response disease mechanisms are not well understood at the present time. Consequently, in light of this uncertainty, it is recommended that a precautionary approach be adopted in managing the potential health risks associated ...

  20. Aerodynamic loads calculation and analysis for large scale wind turbine based on combining BEM modified theory with dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, J.C. [College of Mechanical and Electrical Engineering, Central South University, Changsha (China); School of Electromechanical Engineering, Hunan University of Science and Technology, Xiangtan (China); Hu, Y.P.; Liu, D.S. [School of Electromechanical Engineering, Hunan University of Science and Technology, Xiangtan (China); Long, X. [Hara XEMC Windpower Co., Ltd., Xiangtan (China)

    2011-03-15

    The aerodynamic loads for MW-scale horizontal-axis wind turbines are calculated and analyzed in the established coordinate systems used to describe the wind turbine. In this paper, the blade element momentum (BEM) theory is employed and some corrections, such as the Prandtl and Buhl models, are applied. Based on the B-L semi-empirical dynamic stall (DS) model, a new modified DS model for the NACA63-4xx airfoil is adopted. Then, by combining the modified BEM theory with the DS model, a method for calculating the aerodynamic loads of large scale wind turbines is proposed, in which influence factors such as wind shear, the tower, and tower and blade vibration are considered. The research results show that the presented dynamic stall model is good enough for engineering purposes; that the aerodynamic loads are influenced to differing degrees by many factors such as tower shadow, wind shear, dynamic stall, and tower and blade vibration; and that the single blade endures periodically changing loads, while the variations of the rotor shaft power caused by the total aerodynamic torque in the edgewise direction are very small. The presented approach to aerodynamic load calculation and analysis is of general applicability and is helpful for thorough research on load reduction for large scale wind turbines. (author)
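
    To make the BEM part concrete, here is a minimal sketch of the classical fixed-point iteration with the Prandtl tip-loss factor for one blade element; the rotor geometry, operating point and the thin-airfoil stand-in polars are invented, and the Buhl high-induction and dynamic stall corrections used in the paper are omitted.

```python
import numpy as np

def bem_element(r, R=45.0, B=3, V=10.0, omega=1.8, chord=3.0, twist=0.08):
    """Fixed-point BEM iteration for one blade element with the Prandtl
    tip-loss correction; airfoil polars are a crude thin-airfoil stand-in."""
    sigma = B * chord / (2 * np.pi * r)        # local solidity
    a, ap = 0.3, 0.0                           # axial/tangential induction
    for _ in range(200):
        phi = np.arctan2((1 - a) * V, (1 + ap) * omega * r)  # inflow angle
        F = 2 / np.pi * np.arccos(np.exp(-B * (R - r) / (2 * r * np.sin(phi))))
        alpha = phi - twist
        cl, cd = 2 * np.pi * alpha, 0.01       # placeholder polars
        cn = cl * np.cos(phi) + cd * np.sin(phi)
        ct = cl * np.sin(phi) - cd * np.cos(phi)
        a_new = 1.0 / (4 * F * np.sin(phi) ** 2 / (sigma * cn) + 1)
        ap_new = 1.0 / (4 * F * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
        if abs(a_new - a) < 1e-8:
            break
        a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)  # relaxed update
    return a, ap, phi

print(bem_element(r=30.0))   # induction factors and inflow angle at r = 30 m
```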

  1. Parallel Dynamic Analysis of a Large-Scale Water Conveyance Tunnel under Seismic Excitation Using ALE Finite-Element Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wang

    2016-01-01

    Full Text Available Parallel analyses of the dynamic responses of a large-scale water conveyance tunnel under seismic excitation are presented in this paper. A full three-dimensional numerical model considering the water-tunnel-soil coupling is established and adopted to investigate the tunnel's dynamic responses. The movement and sloshing of the internal water are simulated using the multi-material Arbitrary Lagrangian Eulerian (ALE) method. Nonlinear fluid-structure interaction (FSI) between the tunnel and the inner water is treated using the penalty method. Nonlinear soil-structure interaction (SSI) between the soil and the tunnel is dealt with using a surface-to-surface contact algorithm. To overcome computing power limitations and to deal with such a large-scale calculation, a parallel algorithm based on modified recursive coordinate bisection (MRCB), considering the balance of SSI and FSI loads, is proposed and used. The whole simulation is accomplished on the Dawning 5000A supercomputer using the proposed MRCB-based parallel algorithm. The simulation model and the proposed approaches are validated by comparison with the added mass method. Dynamic responses of the tunnel are analyzed and the parallelism is discussed. In addition, factors affecting the dynamic responses are investigated. Good speedup and parallel efficiency demonstrate the scalability of the parallel method, and the analysis results can be used to aid in the design of water conveyance tunnels.
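
    For illustration, the following sketch implements plain recursive coordinate bisection, the base algorithm that the paper's MRCB modifies; the element coordinates and the per-element weights (standing in for combined SSI/FSI work) are synthetic, and the MRCB load-balancing refinement itself is not reproduced.

```python
import numpy as np

def rcb(points, weights, n_parts):
    """Plain recursive coordinate bisection: split along the longest axis at
    the weighted median so each side carries a proportional load."""
    if n_parts == 1:
        return [np.arange(len(points))]
    axis = np.argmax(points.max(0) - points.min(0))      # longest extent
    order = np.argsort(points[:, axis])
    cum = np.cumsum(weights[order])
    left_parts = n_parts // 2
    cut = np.searchsorted(cum, cum[-1] * left_parts / n_parts)
    left, right = order[:cut + 1], order[cut + 1:]
    out = [left[p] for p in rcb(points[left], weights[left], left_parts)]
    out += [right[p] for p in rcb(points[right], weights[right], n_parts - left_parts)]
    return out

rng = np.random.default_rng(1)
pts, w = rng.random((10000, 3)), rng.random(10000) + 0.5  # synthetic centroids/costs
parts = rcb(pts, w, 8)
print([round(w[p].sum(), 1) for p in parts])              # near-equal per-part load
```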

  2. CImbinator: a web-based tool for drug synergy analysis in small- and large-scale datasets.

    Science.gov (United States)

    Flobak, Åsmund; Vazquez, Miguel; Lægreid, Astrid; Valencia, Alfonso

    2017-08-01

    Drug synergies are sought in order to identify combinations of drugs that are particularly beneficial. User-friendly software solutions that can assist the analysis of large-scale datasets are required. CImbinator is a web service that can aid in batch-wise and in-depth analyses of data from small-scale and large-scale drug combination screens. CImbinator quantifies drug combination effects using both the commonly employed median-effect equation and more advanced mathematical models describing dose-response relationships. CImbinator is written in Ruby and R. It uses the R package drc for advanced drug response modeling. CImbinator is available at http://cimbinator.bioinfo.cnio.es, the source code is open and available at https://github.com/Rbbt-Workflows/combination_index, and a Docker image is also available at https://hub.docker.com/r/mikisvaz/rbbt-ci_mbinator/ . Contact: asmund.flobak@ntnu.no or miguel.vazquez@cnio.es. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
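
    The median-effect calculation is simple enough to sketch. The snippet below fits the median-effect equation to single-agent data and evaluates the Chou-Talalay combination index for one dose pair; the dose-response numbers are invented, and this is not CImbinator's own code (which builds on the R package drc).

```python
import numpy as np

def median_effect_fit(dose, fa):
    """Fit the median-effect equation fa/(1-fa) = (D/Dm)^m by linear
    regression of log(fa/(1-fa)) on log(D); returns (Dm, m)."""
    x, y = np.log(dose), np.log(fa / (1 - fa))
    m, b = np.polyfit(x, y, 1)
    return np.exp(-b / m), m

def combination_index(d1, d2, fa_combo, fit1, fit2):
    """Chou-Talalay CI at effect level fa_combo for the dose pair (d1, d2)."""
    (Dm1, m1), (Dm2, m2) = fit1, fit2
    Dx1 = Dm1 * (fa_combo / (1 - fa_combo)) ** (1 / m1)   # dose of drug 1 alone
    Dx2 = Dm2 * (fa_combo / (1 - fa_combo)) ** (1 / m2)   # dose of drug 2 alone
    return d1 / Dx1 + d2 / Dx2

# Hypothetical single-agent dose-response data (fraction affected).
doses = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
fa_a = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
fa_b = np.array([0.05, 0.15, 0.40, 0.70, 0.88])
fit_a, fit_b = median_effect_fit(doses, fa_a), median_effect_fit(doses, fa_b)
ci = combination_index(0.5, 0.5, fa_combo=0.60, fit1=fit_a, fit2=fit_b)
print(f"CI = {ci:.2f}  (<1 synergy, =1 additive, >1 antagonism)")
```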

  3. Low-cost and large-scale flexible SERS-cotton fabric as a wipe substrate for surface trace analysis

    Science.gov (United States)

    Chen, Yanmin; Ge, Fengyan; Guang, Shanyi; Cai, Zaisheng

    2018-04-01

    Large-scale surface-enhanced Raman scattering (SERS) cotton fabrics were fabricated from traditional woven fabrics using a dyeing-like method adapted from vat dyeing, in which silver nanoparticles (Ag NPs) were synthesized in situ by a 'dipping-reducing-drying' process. By controlling the concentration of the AgNO3 solution, an optimal SERS cotton fabric with a homogeneous close packing of Ag NPs was obtained. The SERS cotton fabric was employed to detect p-aminothiophenol (PATP). The fabric was found to possess excellent reproducibility (about 20%), long-term stability (about 57 days) and high SERS sensitivity, with a detectable concentration as low as 10^-12 M. Furthermore, owing to its excellent mechanical flexibility and good absorption ability, the SERS cotton fabric was employed to detect carbaryl on the surface of an apple by simple swabbing, which showed great potential for fast trace analysis. More importantly, this approach may enable low-cost, large-scale production based on a traditional cotton fabric.

  4. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    Science.gov (United States)

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid, as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys. 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared among EDA approaches, such as the basis set dependence of the polarization and charge transfer terms, are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capability for large-scale calculations with our approach on complexes of thrombin with an inhibitor, comprising up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied to a large range of biomolecular problems, especially in the context of drug design.

  5. Tokyo Tech–Hitotsubashi Interdisciplinary Conference : New Approaches to the Analysis of Large-Scale Business and Economic Data

    CERN Document Server

    Takayasu, Misako; Takayasu, Hideki; Econophysics Approaches to Large-Scale Business Data and Financial Crisis

    2010-01-01

    The new science of econophysics has arisen out of the information age. As large-scale economic data are being increasingly generated by industries and enterprises worldwide, researchers from fields such as physics, mathematics, and information sciences are becoming involved. The vast number of transactions taking place, both in the financial markets and in the retail sector, is usually studied by economists and management and now by econophysicists. Using cutting-edge tools of computational analysis while searching for regularities and “laws” such as those found in the natural sciences, econophysicists have come up with intriguing results. The ultimate aim is to establish fundamental data collection and analysis techniques that embrace the expertise of a variety of academic disciplines. This book comprises selected papers from the international conference on novel analytical approaches to economic data held in Tokyo in March 2009. The papers include detailed reports on the market behavior during the finan...

  6. Multielement determination in some egyptian vegetables by instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Tadros, N.A.; Abdel-Fattah, A.A.; Sanad, W.A.

    1999-01-01

    The nondestructive instrumental neutron activation analysis (INAA) technique, with thermal neutrons, has been applied for the multielement determination of major, trace and ultra-trace elements in eleven types of popular Egyptian edible vegetables, namely dill, moulokhyia, okra, negro bean, parsley, green pea, grape leaves, spinach, mint, celery and salad chervil, cultivated in and collected from El-Maadi, Cairo, Egypt. Concentrations of Na, K, Ca, Sc, Cr, Fe, Co, Ni, Zn, Rb, Zr, Nb, Mo, Sb, Cs, Ba, La, Ce, Tb, Yb, Hf, Ta, Th and U were determined. The standard reference materials (SRMs) G-2, JG-1 and MAG-1, provided by the IAEA, were used, and the high accuracy of the work was thereby assured. The results are discussed.

  7. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    Science.gov (United States)

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
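
    The orchestration layer described here is, in outline, a scripted pipeline with logging and per-channel parallelism. The sketch below shows that shape in plain Python with trivial stand-in functions; the real FARSIGHT modules are compiled C++ components driven through their own bindings, so every function name and the file layout here are hypothetical.

```python
import logging
from concurrent.futures import ProcessPoolExecutor

# Trivial stand-ins for the compiled modules a real script would drive.
def mosaic(path):
    return path            # would stitch tiles into one volume

def preprocess(volume):
    return volume          # would correct imaging artifacts

def segment(volume):
    return 0               # would return an object count or label image

logging.basicConfig(filename="pipeline.log", level=logging.INFO,
                    format="%(asctime)s %(processName)s %(message)s")

def process_channel(path):
    """One fluorescent channel: mosaic -> artifact removal -> segmentation."""
    logging.info("start %s", path)
    result = segment(preprocess(mosaic(path)))
    logging.info("done %s", path)
    return path, result

if __name__ == "__main__":
    channels = [f"dataset_042/ch{i}.tif" for i in range(5)]   # hypothetical layout
    with ProcessPoolExecutor(max_workers=4) as pool:          # one worker per core
        for path, n in pool.map(process_channel, channels):
            logging.info("%s -> %s objects", path, n)
```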

  8. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

    Directory of Open Access Journals (Sweden)

    Nicolas eRey-Villamizar

    2014-04-01

    Full Text Available In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6,000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and to perform large-scale analytics for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1 TB of RAM, running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between compute and storage servers, logs all processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  9. Consistency and Variability in Talk about "Diversity": An Empirical Analysis of Discursive Scope in Swiss Large Scale Enterprises

    Directory of Open Access Journals (Sweden)

    Anja Ostendorp

    2009-02-01

    Full Text Available Traditionally, discussions of "diversity" in organizations refer either to an ideal "management" of a diverse workforce or to specific concerns of minorities. The term diversity, however, entails a growing number of translations. Given this diversity of diversity, the concept cannot be conceived of as merely social-normative or economic-functional. The present study therefore empirically scrutinizes the current scope of diversity-talk in Swiss large-scale enterprises from a discursive psychological perspective. First, it identifies five so-called interpretative repertoires, which focus on: image, market, minorities, themes, and difference. Second, it discusses why and how persons oscillate between consistency and variability whenever they draw upon these different repertoires. Finally, it points out possibilities for combining them. This empirical approach to diversity in organizations offers new aspects to the current debate on diversity and introduces crucial concepts of discursive psychological analysis. URN: urn:nbn:de:0114-fqs090218

  10. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a model of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review of recent results on spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited to the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles. (paper)
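
    To give a flavor of the sampling step such fitting relies on, the sketch below draws samples from a pairwise (spatial, not spatio-temporal) MaxEnt model over binary spike words by Metropolis dynamics; the parameters are random placeholders, whereas a real fit would iteratively adjust them until the model moments match the recorded statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                     # neurons
h = rng.normal(0.0, 0.3, N)                # bias (rate) parameters
J = np.triu(rng.normal(0.0, 0.1, (N, N)), 1)
J = J + J.T                                # symmetric couplings, zero diagonal

def metropolis_sample(h, J, n_steps=200_000, burn=20_000, thin=100):
    """Metropolis sampling of a pairwise MaxEnt model P(s) ~ exp(h.s + s.J.s/2)
    over binary words s in {0,1}^N; returns thinned samples after burn-in."""
    s = rng.integers(0, 2, h.size)
    out = []
    for t in range(n_steps):
        i = rng.integers(h.size)
        dE = (1 - 2 * s[i]) * (h[i] + J[i] @ s)   # log-prob change if s[i] flips
        if np.log(rng.random()) < dE:             # accept with prob min(1, e^dE)
            s[i] ^= 1
        if t >= burn and t % thin == 0:
            out.append(s.copy())
    return np.array(out)

S = metropolis_sample(h, J)
print("model mean rates:", S.mean(axis=0).round(2))  # compare to data moments
```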

  11. Water Saving and Cost Analysis of Large-Scale Implementation of Domestic Rain Water Harvesting in Minor Mediterranean Islands

    Directory of Open Access Journals (Sweden)

    Alberto Campisano

    2017-11-01

    Full Text Available This paper describes a novel methodology to evaluate the benefits of large-scale installation of domestic Rain Water Harvesting (RWH) systems in multi-story buildings. The methodology was specifically developed for application to small settlements of the minor Mediterranean islands, characterized by sharp fluctuations in precipitation and water demand between the winter and summer periods. The methodology is based on the combined use of regressive models for water saving evaluation and of geospatial analysis tools for the semi-automatic collection of spatial information at the building/household level. An application to the old town of Lipari (Aeolian Islands) showed potential for high yearly water savings (between 30% and 50%), with return on investment in less than 15 years for about 50% of the installed RWH systems.

  12. Energy modeling and analysis for optimal grid integration of large-scale variable renewables using hydrogen storage in Japan

    International Nuclear Information System (INIS)

    Komiyama, Ryoichi; Otsuki, Takashi; Fujii, Yasumasa

    2015-01-01

    Although the extensive introduction of VRs (variable renewables) will play an essential role in resolving energy and environmental issues in Japan after the Fukushima nuclear accident, their large-scale integration poses a technical challenge in grid management; as one technical countermeasure, hydrogen storage receives much attention, as does the rechargeable battery, for controlling the intermittency of VR power output. For properly planning renewable energy policies, energy system modeling is important to quantify and qualitatively understand the potential benefits and impacts. This paper analyzes the optimal grid integration of large-scale VRs using hydrogen storage in Japan by developing a high time-resolution optimal power generation mix model. Simulation results suggest that the installation of hydrogen storage is promoted both by its cost reduction and by CO2 regulation policy. In addition, hydrogen storage turns out to be suitable for storing VR energy over long periods of time. Finally, a sensitivity analysis of rechargeable battery cost shows that hydrogen storage is economically competitive with rechargeable batteries; the costs of both technologies should be more carefully assessed when formulating effective energy policies to integrate massive VRs into the country's power system in an economical manner. - Highlights: • The authors analyze hydrogen storage coupled with VRs (variable renewables). • The simulation analysis is performed with an optimal power generation mix model. • Hydrogen storage installation is promoted by its cost decline and CO2 regulation. • Hydrogen storage is suitable for storing VR energy over long periods of time. • Hydrogen storage is economically competitive with rechargeable batteries

  13. Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers

    KAUST Repository

    Sifaou, Houssem

    2016-12-28

    This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS), endowed with M antennas, communicates with K single-antenna user equipments (UEs). In particular, we aim at reducing the complexity of the linear precoder and receiver that maximize the minimum signal-to-interference-plus-noise ratio subject to a given power constraint. To this end, we consider the asymptotic regime in which M and K grow large with a given ratio. Tools from random matrix theory (RMT) are then used to compute, in closed form, accurate approximations for the parameters of the optimal precoder and receiver, when imperfect channel state information (modeled by the generic Gauss-Markov formulation) is available at the BS. The asymptotic analysis allows us to derive the asymptotically optimal linear precoder and receiver that are characterized by lower complexity (due to the dependence on the large-scale components of the channel) and, possibly, by a better resilience to imperfect channel state information. However, the implementation of both is still challenging as it requires fast inversions of large matrices in every coherence period. To overcome this issue, we apply the truncated polynomial expansion (TPE) technique to the precoding and receiving vector of each UE and make use of RMT to determine the optimal weighting coefficients on a per-UE basis that asymptotically solve the max-min SINR problem. Numerical results are used to validate the asymptotic analysis in the finite system regime and to show that the proposed TPE transceivers efficiently mimic the optimal ones, while requiring much lower computational complexity.

  14. Large-Scale Genomic Analysis of Codon Usage in Dengue Virus and Evaluation of Its Phylogenetic Dependence

    Directory of Open Access Journals (Sweden)

    Edgar E. Lara-Ramírez

    2014-01-01

    Full Text Available The increasing number of dengue virus (DENV) genome sequences available allows identifying the contributing factors to DENV evolution. In the present study, the codon usage in serotypes 1–4 (DENV1–4) has been explored for 3047 sequenced genomes using different statistical methods. The correlation analysis of total GC content (GC) with GC content at the three nucleotide positions of codons (GC1, GC2, and GC3), as well as plots of the effective number of codons (ENC, ENCp) versus GC3, revealed mutational bias and purifying selection pressures as the major forces influencing the codon usage, but with distinct pressure on specific nucleotide positions in the codon. The correspondence analysis (CA) and clustering analysis on relative synonymous codon usage (RSCU) within each serotype showed clustering patterns similar to those of the phylogenetic analysis of nucleotide sequences for DENV1–4. These clustering patterns are strongly related to the virus's geographic origin. The phylogenetic dependence analysis also suggests that stabilizing selection acts on the codon usage bias. Our large-scale analysis reveals new features of DENV genomic evolution.
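
    To make the central quantity concrete, the snippet below computes RSCU values from a coding sequence, with the standard genetic code built inline; the input fragment is an invented stand-in, not a DENV sequence.

```python
from collections import Counter

# Standard genetic code (translation table 1), third base cycling fastest.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {BASES[i // 16] + BASES[i // 4 % 4] + BASES[i % 4]: aa
        for i, aa in enumerate(AA) if aa != "*"}

def rscu(cds):
    """Relative synonymous codon usage: observed codon count divided by the
    mean count over its synonymous family (1.0 means no usage bias)."""
    counts = Counter(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))
    values = {}
    for aa in set(CODE.values()):
        family = [c for c, a in CODE.items() if a == aa]
        total = sum(counts[c] for c in family)
        if total:
            for c in family:
                values[c] = counts[c] * len(family) / total
    return values

# Invented coding fragment (not a DENV sequence), repeated to mimic a CDS.
vals = rscu("ATGGCGGCGGCCCTGCTGTTACTT" * 200)
print({c: round(v, 2) for c, v in sorted(vals.items()) if CODE[c] == "A"})
```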

  15. Large-Scale Genomic Analysis of Codon Usage in Dengue Virus and Evaluation of Its Phylogenetic Dependence

    Science.gov (United States)

    Lara-Ramírez, Edgar E.; Salazar, Ma Isabel; López-López, María de Jesús; Salas-Benito, Juan Santiago; Sánchez-Varela, Alejandro

    2014-01-01

    The increasing number of dengue virus (DENV) genome sequences available allows identifying the contributing factors to DENV evolution. In the present study, the codon usage in serotypes 1–4 (DENV1–4) has been explored for 3047 sequenced genomes using different statistical methods. The correlation analysis of total GC content (GC) with GC content at the three nucleotide positions of codons (GC1, GC2, and GC3), as well as plots of the effective number of codons (ENC, ENCp) versus GC3, revealed mutational bias and purifying selection pressures as the major forces influencing the codon usage, but with distinct pressure on specific nucleotide positions in the codon. The correspondence analysis (CA) and clustering analysis on relative synonymous codon usage (RSCU) within each serotype showed clustering patterns similar to those of the phylogenetic analysis of nucleotide sequences for DENV1–4. These clustering patterns are strongly related to the virus's geographic origin. The phylogenetic dependence analysis also suggests that stabilizing selection acts on the codon usage bias. Our large-scale analysis reveals new features of DENV genomic evolution. PMID:25136631

  16. Numerical Analysis of Soil Settlement Prediction and Its Application In Large-Scale Marine Reclamation Artificial Island Project

    Directory of Open Access Journals (Sweden)

    Zhao Jie

    2017-11-01

    Full Text Available In an artificial island construction project based on large-scale marine reclamation, soil settlement is key to the safe long-term operation of the whole site. To analyze the factors governing soil settlement in a marine reclamation project, the SEM method of soil micro-structural analysis was used to test and study six representative soil samples from the area, including silt, mucky silty clay, silty clay and clay. The structural characteristics that affect soil settlement were obtained by examining the SEM images at different depths. Combining numerical methods based on Terzaghi's one-dimensional and Biot's two-dimensional consolidation theories, one-dimensional and two-dimensional creep models were established, and the numerical results of the two consolidation theories were compared in order to predict the maximum settlement of the soils 100 years after completion. The analysis indicates that the micro-structural characteristics are the essential factor affecting settlement in this area. Based on the one-dimensional and two-dimensional settlement analyses, the settlement laws and trends obtained by the two numerical methods are similar. The analysis in this paper can provide reference and guidance for projects involving marine reclamation land.
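
    As a pointer to the one-dimensional side of the calculation, the sketch below evaluates Terzaghi's average degree of consolidation and the corresponding settlement history for a hypothetical soft layer; every soil parameter is illustrative, and the creep and Biot two-dimensional components of the paper are not reproduced.

```python
import numpy as np

def degree_of_consolidation(Tv, n_terms=100):
    """Terzaghi 1-D solution: average degree of consolidation U for the
    time factor Tv = cv * t / H_dr^2, summed over the standard series."""
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2
    return 1 - np.sum(2 / M**2 * np.exp(-(M**2) * Tv))

# Illustrative soft-clay layer (all parameters hypothetical).
Cc, e0, H, sigma0, dsigma = 0.45, 1.2, 12.0, 80.0, 60.0   # -, -, m, kPa, kPa
cv, H_dr = 1.5, 6.0                                       # m^2/yr, drainage path (m)

s_final = Cc / (1 + e0) * H * np.log10((sigma0 + dsigma) / sigma0)
for t in (1, 5, 20, 100):                                 # years after completion
    U = degree_of_consolidation(cv * t / H_dr**2)
    print(f"t = {t:>3} yr: U = {U:.2f}, settlement = {U * s_final:.2f} m")
```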

  17. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    Energy Technology Data Exchange (ETDEWEB)

    Messer, Bronson [ORNL; Sewell, Christopher [Los Alamos National Laboratory (LANL); Heitmann, Katrin [ORNL; Finkel, Dr. Hal J [Argonne National Laboratory (ANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Zagaris, George [Lawrence Livermore National Laboratory (LLNL); Pope, Adrian [Los Alamos National Laboratory (LANL); Habib, Salman [ORNL; Parete-Koon, Suzanne T [ORNL

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Emerging techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks to be handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  18. Multi-element analysis of emeralds and associated rocks by k{sub 0} neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Acharya, R.N.; Mondal, R.K.; Burte, P.P.; Nair, A.G.C.; Reddy, N.B.Y.; Reddy, L.K.; Reddy, A.V.R.; Manohar, S.B

    2000-12-15

    Multi-element analysis was carried out in natural emeralds, their associated rocks and one sample of beryl obtained from Rajasthan, India. The concentrations of 21 elements were assayed by Instrumental Neutron Activation Analysis using the k{sub 0} method (k{sub 0} INAA method) and high-resolution gamma ray spectrometry. The data reveal the segregation of some elements from associated (trapped and host) rocks to the mineral beryl forming the gemstones. A reference rock standard of the US Geological Survey (USGS BCR-1) was also analysed as a control of the method.

  19. Systems Perturbation Analysis of a Large-Scale Signal Transduction Model Reveals Potentially Influential Candidates for Cancer Therapeutics

    Science.gov (United States)

    Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš

    2016-01-01

    Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model's components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found to be the most influential under inactivating perturbations, whereas the kinase and small cell lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified among the influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis ...

  20. Multi-element determination in environmental samples by mass spectrometric isotope dilution analysis using thermal ionization. Pt. 2

    International Nuclear Information System (INIS)

    Hilpert, K.; Waidmann, E.

    1988-01-01

    An analytical procedure for the multi-element analysis of the elements Fe, Ni, Cu, Zn, Ga, Rb, Sr, Cd, Ba, Tl, and Pb in pine needles by mass spectrometric isotope dilution analysis using thermal ionization has been reported in Part I of this paper. This procedure is now transferred to the non-vegetable material 'Oyster Tissue' (Standard Reference Material 1566, National Bureau of Standards, USA). By a modification of the analytical procedure, it was possible to determine Cr in this material in addition to the aforementioned elements. No concentrations are certified for the elements Ga, Ba and Tl analyzed in this work. The concentrations of the remaining elements obtained by the multi-element analysis agree well with those certified. (orig.)

  1. Information contained within the large scale gas injection test (Lasgit) dataset exposed using a bespoke data analysis tool-kit

    International Nuclear Information System (INIS)

    Bennett, D.P.; Thomas, H.R.; Cuss, R.J.; Harrington, J.F.; Vardon, P.J.

    2012-01-01

    Document available in extended abstract form only. The Large Scale Gas Injection Test (Lasgit) is a field-scale experiment run by the British Geological Survey (BGS) and located approximately 420 m underground at SKB's Aespoe Hard Rock Laboratory (HRL) in Sweden. It has been designed to study the impact on safety of gas build-up within a KBS-3V concept high-level radioactive waste repository. Lasgit has been in almost continuous operation for approximately seven years and is still underway. An analysis of the dataset arising from the Lasgit experiment, with particular attention to the smaller-scale features and phenomena recorded, has been undertaken in parallel to the macro-scale analysis performed by the BGS. Lasgit is a highly instrumented, frequently sampled and long-lived experiment, leading to a substantial dataset containing in excess of 14.7 million data points. The data are anticipated to include a wealth of information, both on overall processes and on smaller-scale or 'second-order' features. Owing to the size of the dataset, the detail of the analysis required, and the reduction in subjectivity associated with measurement compared to observation, computational analysis is essential. Moreover, due to the length of operation and the complexity of experimental activity, the Lasgit dataset is not typically suited to 'out of the box' time series analysis algorithms. In particular, the features that are not suited to standard algorithms include non-uniformities due to (deliberate) changes in sample rate at various points in the experimental history and missing data due to hardware malfunction/failure interrupting logging cycles. To address these features, a computational tool-kit capable of performing an Exploratory Data Analysis (EDA) on long-term, large-scale datasets with non-uniformities has been developed. Particular tool-kit abilities include: the parameterization of signal variation in the dataset ...
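
    The kinds of non-uniformity described (sample-rate changes, logger outages) are easy to illustrate. The sketch below builds a synthetic sensor channel with both defects and regularizes it in one conventional way with pandas; none of this is the bespoke tool-kit itself, whose internals the abstract does not disclose.

```python
import numpy as np
import pandas as pd

# Synthetic channel: 10-min sampling, a logger outage, then 2-min sampling,
# mimicking the non-uniformities described above (all values invented).
minutes = np.r_[np.arange(0, 5000, 10), np.arange(6000, 10000, 2)]
t = pd.to_datetime("2005-02-01") + pd.to_timedelta(minutes, unit="min")
rng = np.random.default_rng(3)
x = pd.Series(np.sin(minutes / 600) + rng.normal(0, 0.05, minutes.size), index=t)

hourly = x.resample("1h").mean()              # one uniform grid despite mixed rates
n_gaps = int(hourly.isna().sum())             # the outage becomes explicit NaNs
filled = hourly.interpolate(limit=3)          # bridge only short gaps
trend = filled.rolling(24, center=True).median()
print(f"{n_gaps} missing hours; residual std = {(filled - trend).std():.3f}")
```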

  2. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed, taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
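
    A toy version of the statistical picture: if the combined set-up/hold fluctuation and timing jitter are modeled as a single Gaussian spread, the per-operation error probability is the Gaussian tail of the margin-to-sigma ratio. This model and all the numbers below are illustrative assumptions, not the paper's analysis.

```python
import math

def error_rate(margin_ps, sigma_ps):
    """Probability that a Gaussian timing fluctuation (assumed combined
    set-up/hold spread plus jitter, std sigma_ps) exceeds the margin."""
    return 0.5 * math.erfc(margin_ps / (sigma_ps * math.sqrt(2)))

sigma = 1.2   # ps, hypothetical combined fluctuation
for margin in (3, 5, 7, 9):
    p = error_rate(margin, sigma)
    print(f"margin {margin} ps: per-operation error ~ {p:.2e}, "
          f"per 10^6-bit register pass ~ {min(1.0, 1e6 * p):.2e}")
```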

  3. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed, taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  4. Research Guidelines in the Era of Large-scale Collaborations: An Analysis of Genome-wide Association Study Consortia

    Science.gov (United States)

    Austin, Melissa A.; Hair, Marilyn S.; Fullerton, Stephanie M.

    2012-01-01

    Scientific research has shifted from studies conducted by single investigators to the creation of large consortia. Genetic epidemiologists, for example, now collaborate extensively for genome-wide association studies (GWAS). The effect has been a stream of confirmed disease-gene associations. However, the effects on human subjects oversight, data-sharing, publication and authorship practices, research organization and productivity, and intellectual property remain to be examined. The aim of this analysis was to identify all research consortia that had published the results of a GWAS analysis since 2005, characterize them, determine which have publicly accessible guidelines for research practices, and summarize the policies in these guidelines. A review of the National Human Genome Research Institute's Catalog of Published Genome-Wide Association Studies identified 55 GWAS consortia as of April 1, 2011. These consortia comprised individual investigators, research centers, studies, or other consortia, and studied 48 different diseases or traits. Only 14 (25%) were found to have publicly accessible research guidelines on consortia websites. The available guidelines provide information on organization, governance, and research protocols; half address institutional review board approval. Details of publication, authorship, data-sharing, and intellectual property vary considerably. Wider access to consortia guidelines is needed to establish appropriate research standards with broad applicability to emerging forms of large-scale collaboration. PMID:22491085

  5. Large-scale analysis of peptide sequence variants: the case for high-field asymmetric waveform ion mobility spectrometry.

    Science.gov (United States)

    Creese, Andrew J; Smart, Jade; Cooper, Helen J

    2013-05-21

    Large scale analysis of proteins by mass spectrometry is becoming increasingly routine; however, the presence of peptide isomers remains a significant challenge for both identification and quantitation in proteomics. Classes of isomers include sequence inversions, structural isomers, and localization variants. In many cases, liquid chromatography is inadequate for separation of peptide isomers. The resulting tandem mass spectra are composite, containing fragments from multiple precursor ions. The benefits of high-field asymmetric waveform ion mobility spectrometry (FAIMS) for proteomics have been demonstrated by a number of groups, but previous work has focused on extending proteome coverage generally. Here, we present a systematic study of the benefits of FAIMS for a key challenge in proteomics, that of peptide isomers. We have applied FAIMS to the analysis of a phosphopeptide library comprising the sequences GPSGXVpSXAQLX(K/R) and SXPFKXpSPLXFG(K/R), where X = ADEFGLSTVY. The library has defined limits, enabling us to make valid conclusions regarding FAIMS performance. The library contains numerous sequence inversions and structural isomers. In addition, there are large numbers of theoretical localization variants, allowing false localization rates to be determined. The FAIMS approach is compared with reversed-phase liquid chromatography and strong cation exchange chromatography. The FAIMS approach identified 35% of the peptide library, whereas LC-MS/MS alone identified 8% and LC-MS/MS with strong cation exchange chromatography prefractionation identified 17.3% of the library.

  6. Large-scale dynamical influence of a gravity wave generated over the Antarctic Peninsula – regional modelling and budget analysis

    Directory of Open Access Journals (Sweden)

    JOEL Arnault

    2013-03-01

    Full Text Available The case study of a mountain wave triggered by the Antarctic Peninsula on 6 October 2005, which has already been documented in the literature, is chosen here to quantify the associated gravity wave forcing on the large-scale flow, with a budget analysis of the horizontal wind components and horizontal kinetic energy. In particular, a numerical simulation using the Weather Research and Forecasting (WRF model is compared to a control simulation with flat orography to separate the contribution of the mountain wave from that of other synoptic processes of non-orographic origin. The so-called differential budgets of horizontal wind components and horizontal kinetic energy (after subtracting the results from the simulation without orography are then averaged horizontally and vertically in the inner domain of the simulation to quantify the mountain wave dynamical influence at this scale. This allows for a quantitative analysis of the simulated mountain wave's dynamical influence, including the orographically induced pressure drag, the counterbalancing wave-induced vertical transport of momentum from the flow aloft, the momentum and energy exchanges with the outer flow at the lateral and upper boundaries, the effect of turbulent mixing, the dynamics associated with geostrophic re-adjustment of the inner flow, the deceleration of the inner flow, the secondary generation of an inertia–gravity wave and the so-called baroclinic conversion of energy between potential energy and kinetic energy.

  7. Large-Scale Cognitive GWAS Meta-Analysis Reveals Tissue-Specific Neural Expression and Potential Nootropic Drug Targets

    Directory of Open Access Journals (Sweden)

    Max Lam

    2017-11-01

    Full Text Available Here, we present a large (n = 107,207) genome-wide association study (GWAS) of general cognitive ability ("g"), further enhanced by combining the results with a large-scale GWAS of educational attainment. We identified 70 independent genomic loci associated with general cognitive ability. Results showed significant enrichment for genes causing Mendelian disorders with an intellectual disability phenotype. Competitive pathway analysis implicated the biological processes of neurogenesis and synaptic regulation, as well as the gene targets of two pharmacologic agents: cinnarizine, a T-type calcium channel blocker, and LY97241, a potassium channel inhibitor. Transcriptome-wide and epigenome-wide analysis revealed that the implicated loci were enriched for genes expressed across all brain regions (most strongly in the cerebellum). Enrichment was exclusive to genes expressed in neurons but not oligodendrocytes or astrocytes. Finally, we report genetic correlations between cognitive ability and disparate phenotypes including psychiatric disorders, several autoimmune disorders, longevity, and maternal age at first birth.

  8. The analysis of energy consumption and greenhouse gas emissions of a large-scale commercial building in Shanghai, China

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2016-02-01

    Full Text Available Sound testing, diagnosis, and analysis are valuable for building energy-efficiency retrofit and management. The energy consumption and greenhouse gas emissions of a large-scale commercial building in Shanghai are described in this article. Basic information about the energy-consuming equipment is included in the investigation. Further diagnoses of the operational state of the air-conditioning water systems and ducted systems were carried out. Energy consumption decreased by 200 kWh/m2 per year from 2007 to 2009 after an energy-saving reconstruction in 2006. Next, a carbon audit was carried out, comprising CO2 emission statistics associated with the energy use, together with categorization and structural analysis (categorization refers to energy categorization; structural analysis refers to the composition and proportions of the various primary and secondary energy sources in energy production or consumption). Greenhouse gas emissions were below 150 kg/m2 per year from 2007 to 2009. An analysis of the correlation between CO2 emissions, building gross domestic product, and energy efficiency is also presented. This article analyzes the energy use and energy-saving reconstruction of a public commercial building in Shanghai, and then presents a carbon audit of the greenhouse gas emissions related to that energy use, in order to give a more comprehensive understanding of the relationship between energy consumption and greenhouse gas emissions and to provide reference data for developing emission-reduction strategies for future buildings.

  9. Multi-element analysis of lubricant oil by WDXRF technique using thin-film sample preparation

    International Nuclear Information System (INIS)

    Scapin, M. A.; Salvador, V. L. R.; Lopes, C. D.; Sato, I. M.

    2006-01-01

    The quantitative analysis of chemical elements in matrices like oils or gels represents a challenge for analytical chemists. Classical methods and instrumental techniques such as atomic absorption spectrometry (AAS) and inductively coupled plasma optical emission spectrometry (ICP-OES) need chemical treatments, mainly sample dissolution and degradation processes. The X-ray fluorescence technique allows direct multi-element analysis without prior sample treatment. In this work, a sensitive method for the determination of the elements Mg, Al, Si, P, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Mo, Ag, Sn, Ba and Pb in lubricating oil is presented. The wavelength-dispersive X-ray fluorescence (WDXRF) technique was used, with a linear regression method and thin-film sample preparation. The validation of the methodology (repeatability and accuracy) was carried out by analysis of the standard reference material SRM Alpha AESAR lot 703527D, applying the Chauvenet, Cochrane, ANOVA and Z-score statistical tests. The method presents a relative standard deviation lower than 10% for all the elements, except for the Pb determination (RSD for Pb of 15%). The Z-score values for all the elements were in the range -2 < Z < 2, indicating very good accuracy.
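
    The calibration and Z-score steps are conventional enough to sketch. The snippet below fits a linear intensity-to-concentration calibration and scores a control sample against its certified value; all counts, concentrations and the certified value are invented, not the SRM's actual data.

```python
import numpy as np

# Linear-regression calibration: intensities of thin-film standards vs.
# known concentrations, then a Z-score check of a control sample.
conc = np.array([10., 25., 50., 100., 200.])             # mg/kg, standards
intensity = np.array([152., 361., 748., 1490., 2980.])   # counts/s

slope, intercept = np.polyfit(conc, intensity, 1)
measured = (2210. - intercept) / slope                   # unknown sample -> mg/kg

cert, u_cert = 150.0, 6.0                                # certified value and std
z = (measured - cert) / u_cert                           # |z| < 2 taken as satisfactory
print(f"measured = {measured:.1f} mg/kg, Z-score = {z:.2f}")
```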

  10. Large-scale analysis of antisense transcription in wheat using the Affymetrix GeneChip Wheat Genome Array

    Directory of Open Access Journals (Sweden)

    Settles Matthew L

    2009-05-01

    ... sense-antisense transcript pairs: analysis of the gene ontology terms showed a significant over-representation of transcripts involved in energy production. These included several representations of ATP synthase, photosystem proteins and RUBISCO, which indicated that photosynthesis is likely to be regulated by antisense transcripts. Conclusion: This study demonstrated the novel use of an adapted labeling protocol and a 3'IVT GeneChip array for large-scale identification of antisense transcription in wheat. The results show that antisense transcription is relatively abundant in wheat, and may affect the expression of valuable agronomic phenotypes. Future work should select potentially interesting transcript pairs for further functional characterization to determine biological activity.

  11. Large-scale network analysis of imagination reveals extended but limited top-down components in human visual cognition.

    Directory of Open Access Journals (Sweden)

    Verkhlyutov V.M.

    2014-12-01

    Full Text Available We investigated whole-brain functional magnetic resonance imaging (fMRI) activation in a group of 21 healthy adult subjects during perception, imagination and remembering of two dynamic visual scenarios. Activation of the posterior parts of the cortex prevailed when watching the videos. The cognitive tasks of imagination and remembering were accompanied by predominant activity in the anterior parts of the cortex. An independent component analysis identified seven large-scale cortical networks with relatively invariant spatial distributions across all experimental conditions. The time course of their activation over experimental sessions was task-dependent. These detected networks can be interpreted as a recombination of resting state networks. Both central and peripheral networks were identified within the primary visual cortex. The central network around the caudal pole of BA17 and the centers of other visual areas was activated only by direct visual stimulation, while the peripheral network responded to the presentation of visual information as well as to the cognitive tasks of imagination and remembering. The latter result explains the particular susceptibility of peripheral and twilight vision to cognitive top-down influences, which often result in false-alarm detections.
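
    Spatial ICA of the kind used to extract such networks can be sketched with a standard library. Below, synthetic 'fMRI' data (time points by voxels) are unmixed into independent spatial maps and associated time courses with scikit-learn's FastICA; the dimensions and data are invented, and the study's actual preprocessing and group-level procedure are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical stand-in for preprocessed fMRI data: time points x voxels.
rng = np.random.default_rng(4)
T, V, K = 240, 5000, 7
sources = rng.laplace(size=(K, V))            # spatial maps (super-Gaussian)
mixing = rng.normal(size=(T, K))              # associated time courses
data = mixing @ sources + rng.normal(0, 0.1, (T, V))

# Spatial ICA as commonly applied to fMRI: unmix the voxel dimension into
# independent spatial components, each with an associated time course.
ica = FastICA(n_components=K, random_state=0, max_iter=1000)
maps = ica.fit_transform(data.T).T            # K spatial maps (components x voxels)
timecourses = ica.mixing_                     # T x K activation time courses
print(maps.shape, timecourses.shape)
```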

  12. Distribution of ground rigidity and ground model for seismic response analysis in Hualian project of large scale seismic test

    International Nuclear Information System (INIS)

    Kokusho, T.; Nishi, K.; Okamoto, T.; Tanaka, Y.; Ueshima, T.; Kudo, K.; Kataoka, T.; Ikemi, M.; Kawai, T.; Sawada, Y.; Suzuki, K.; Yajima, K.; Higashi, S.

    1997-01-01

    An international joint research program called HLSST is in progress. HLSST is a large-scale seismic test (LSST) to investigate soil-structure interaction (SSI) during large earthquakes in the field at Hualien, a highly seismic region in Taiwan. A 1/4-scale model building was constructed on the gravelly soil at this site, and backfill material of crushed stone was placed around the model plant after the excavation made for the construction. The model building and the foundation ground were extensively instrumented to monitor the structural and ground response. To accurately evaluate SSI during earthquakes, geotechnical investigations and forced vibration tests were performed during the construction process, namely before and after base excavation, after structure construction and after backfilling. The distribution of the mechanical properties of the gravelly soil and the backfill was measured after the completion of construction by penetration tests, PS-logging, etc. This paper describes the distribution and the change of the shear wave velocity (Vs) measured in the field tests. The effect of overburden pressure during the construction process on Vs in the neighbouring soil is discussed, as is the numerical soil model for SSI analysis. (orig.)

  13. Modified Principal Component Analysis for Identifying Key Environmental Indicators and Application to a Large-Scale Tidal Flat Reclamation

    Directory of Open Access Journals (Sweden)

    Kejian Chu

    2018-01-01

    Full Text Available Identification of the key environmental indicators (KEIs) from a large number of environmental variables is important for environmental management in tidal flat reclamation areas. In this study, a modified principal component analysis (MPCA) approach has been developed for determining the KEIs. The MPCA accounts for two important attributes of the environmental variables, pollution status and temporal variation, in addition to the commonly considered numerical divergence attribute. It also incorporates the distance correlation (dCor) in place of Pearson's correlation to measure the nonlinear interrelationships between the variables. The proposed method was applied to the Tiaozini sand shoal, a large-scale tidal flat reclamation region in China. Five KEIs were identified: dissolved inorganic nitrogen, Cd and petroleum in the water column, and Hg and total organic carbon in the sediment. The identified KEIs were shown to respond well to the biodiversity of phytoplankton, demonstrating that they adequately represent the environmental condition of the coastal marine system. The MPCA is therefore a practicable method for extracting effective indicators that play key roles in the coastal and marine environment.
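
    Since dCor is the pivotal substitution in the MPCA, a self-contained sketch may help. The function below computes the sample distance correlation and contrasts it with Pearson's r on a noiseless nonlinear relation; the data are synthetic.

```python
import numpy as np

def dcor(x, y):
    """Sample distance correlation between two 1-D variables; unlike
    Pearson's r it is zero only under independence and captures
    nonlinear association."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])           # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / dvar) if dvar > 0 else 0.0

rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 500)
print(f"Pearson r ~ {np.corrcoef(u, u**2)[0, 1]:.2f}, dCor = {dcor(u, u**2):.2f}")
```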

  14. A New Perspective on Polyploid Fragaria (Strawberry) Genome Composition Based on Large-Scale, Multi-Locus Phylogenetic Analysis.

    Science.gov (United States)

    Yang, Yilong; Davis, Thomas M

    2017-12-01

    The subgenomic compositions of the octoploid (2n = 8× = 56) strawberry (Fragaria) species, including the economically important cultivated species Fragaria x ananassa, have been a topic of long-standing interest. Phylogenomic approaches utilizing next-generation sequencing technologies offer a new window into species relationships and the subgenomic compositions of polyploids. We have conducted a large-scale phylogenetic analysis of Fragaria (strawberry) species using the Fluidigm Access Array system and the 454 sequencing platform. About 24 single-copy or low-copy nuclear genes distributed across the genome were amplified and sequenced from 96 genomic DNA samples representing 16 Fragaria species from diploid (2×) to decaploid (10×), including the most extensive sampling of octoploid taxa yet reported. Individual gene trees were constructed by different tree-building methods. Mosaic genomic structures of diploid Fragaria species, consisting of sequences at different phylogenetic positions, were observed. Our findings support the presence in the octoploid species of genetic signatures from at least five diploid ancestors (F. vesca, F. iinumae, F. bucharica, F. viridis, and at least one additional allele contributor of unknown identity), and question the extent to which distinct subgenomes are preserved over evolutionary time in the allopolyploid Fragaria species. In addition, our data support divergence between the two wild octoploid species, F. virginiana and F. chiloensis. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
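
    For illustration, per-locus tree building of this kind can be sketched with Biopython; a neighbor-joining tree from an identity distance matrix stands in here for the several tree-building methods used in the study, and the toy alignment is invented for the example.

    ```python
    # One gene tree from a per-locus alignment via neighbor joining (NJ).
    from Bio import Phylo
    from Bio.Align import MultipleSeqAlignment
    from Bio.Phylo.TreeConstruction import (DistanceCalculator,
                                            DistanceTreeConstructor)
    from Bio.Seq import Seq
    from Bio.SeqRecord import SeqRecord

    alignment = MultipleSeqAlignment([      # toy stand-in for one locus
        SeqRecord(Seq("ACTGCTAGCTAG"), id="F_vesca"),
        SeqRecord(Seq("ACTGCTTGCTAG"), id="F_iinumae"),
        SeqRecord(Seq("ACTGCTAGCTGG"), id="F_bucharica"),
        SeqRecord(Seq("ACTCCTAGCTAG"), id="F_viridis"),
    ])
    dm = DistanceCalculator("identity").get_distance(alignment)
    tree = DistanceTreeConstructor().nj(dm)
    Phylo.draw_ascii(tree)
    ```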

  15. Analysis of the electricity demand of Greece for optimal planning of a large-scale hybrid renewable energy system

    Science.gov (United States)

    Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos

    2015-04-01

    The Greek electricity system is examined for the period 2002-2014. The demand load data are analysed at various time scales (hourly, daily, seasonal and annual) and related to the mean daily temperature and the gross domestic product (GDP) of Greece for the same period. The prediction of energy demand, a product of the Greek Independent Power Transmission Operator, is also compared with the demand load. The analysis reveals a change in the electricity demand pattern after the year 2010, related to the decrease of the GDP during the period 2010-2014. The results of the analysis will be used in the development of an energy forecasting system, which will be part of a framework for optimal planning of a large-scale hybrid renewable energy system in which hydropower plays the dominant role. Acknowledgement: This research was funded by the Greek General Secretariat for Research and Technology through the research project Combined REnewable Systems for Sustainable ENergy DevelOpment (CRESSENDO; grant number 5145)
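
    The multi-scale view of the demand data can be sketched with pandas along the following lines; the synthetic series below stands in for the Greek load and temperature records, and the final correlation is only a first-pass check of the temperature coupling the authors describe.

    ```python
    # Resample an hourly demand series to daily/monthly/annual scales and relate
    # it to temperature. All data here are synthetic stand-ins.
    import numpy as np
    import pandas as pd

    idx = pd.date_range("2002-01-01", "2014-12-31 23:00", freq="h")
    rng = np.random.default_rng(7)
    doy = idx.dayofyear.to_numpy()
    t_mean = 17 + 10 * np.sin(2 * np.pi * (doy - 105) / 365.25)  # seasonal temp
    load = pd.Series(5500 + 40 * np.abs(t_mean - 18)             # heating/cooling
                     + rng.normal(0, 150, len(idx)), index=idx, name="load_mw")

    daily = load.resample("D").mean()       # daily mean demand
    monthly = load.resample("MS").mean()    # seasonal structure
    annual = load.resample("YS").sum()      # yearly consumption

    temp = pd.Series(t_mean, index=idx).resample("D").mean()
    print(daily.corr(temp))                 # weak linear corr: relation is U-shaped
    ```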

  16. Social Network Analysis and Mining to Monitor and Identify Problems with Large-Scale Information and Communication Technology Interventions.

    Science.gov (United States)

    da Silva, Aleksandra do Socorro; de Brito, Silvana Rossy; Vijaykumar, Nandamudi Lankalapalli; da Rocha, Cláudio Alex Jorge; Monteiro, Maurílio de Abreu; Costa, João Crisóstomo Weyl Albuquerque; Francês, Carlos Renato Lisboa

    2016-01-01

    The published literature reveals several arguments concerning the strategic importance of information and communication technology (ICT) interventions for developing countries where the digital divide is a challenge. Large-scale ICT interventions can be an option for countries whose regions, both urban and rural, have a high number of digitally excluded people. Our goal was to monitor and identify problems in interventions aimed at certification for a large number of participants in different geographical regions. Our case study is the training at Telecentros.BR, a program created in Brazil to install telecenters and certify individuals to use ICT resources. We propose an approach that applies social network analysis and mining techniques to data collected from the Telecentros.BR dataset and from the socioeconomic and telecommunications infrastructure indicators of the participants' municipalities. We found that (i) the analysis of interactions in different time periods reflects the objectives of each phase of training, highlighting the increased density in the phase in which participants develop and disseminate their projects; (ii) analysis according to the roles of participants (i.e., tutors or community members) reveals that the interactions were influenced by the center (or region) to which the participant belongs (that is, a community contained mainly members of the same region, always with tutors present, contradicting the expectations of the training project, which aimed for intense collaboration among participants regardless of geographic region); (iii) the social network of participants influences the success of the training: given that a community member's degree is in the highest range, the probability of that individual concluding the training is 0.689; (iv) the North region presented the lowest probability of participant certification, whereas the Northeast, which served municipalities with similar
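
    The per-phase interaction metrics in finding (i) can be illustrated with networkx; the edge list below is a toy stand-in for the Telecentros.BR interaction data.

    ```python
    # Build one interaction graph per training phase and compare densities.
    import networkx as nx

    # (participant_a, participant_b, phase) records; purely illustrative.
    edges = [("u1", "u2", 1), ("u2", "u3", 1), ("u1", "u3", 2), ("u3", "u4", 2)]

    for phase in (1, 2):
        g = nx.Graph()
        g.add_edges_from((a, b) for a, b, p in edges if p == phase)
        print(phase, nx.density(g), dict(g.degree()))  # density and degrees
    ```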

  17. Identifying HIV associated neurocognitive disorder using large-scale Granger causality analysis on resting-state functional MRI

    Science.gov (United States)

    DSouza, Adora M.; Abidin, Anas Z.; Leistritz, Lutz; Wismüller, Axel

    2017-02-01

    We investigate the applicability of large-scale Granger Causality (lsGC) for extracting a measure of multivariate information flow between pairs of regional brain activities from resting-state functional MRI (fMRI) and test the effectiveness of these measures for predicting a disease state. Such pairwise multivariate measures of interaction provide high-dimensional representations of connectivity profiles for each subject and are used in a machine learning task to distinguish between healthy controls and individuals presenting with symptoms of HIV Associated Neurocognitive Disorder (HAND). Cognitive impairment in several domains can occur as a result of HIV infection of the central nervous system. The current paradigm for assessing such impairment is neuropsychological testing. With fMRI data analysis, we aim at non-invasively capturing differences in brain connectivity patterns between healthy subjects and subjects presenting with symptoms of HAND. To classify the extracted interaction patterns among brain regions, we use a prototype-based learning algorithm called Generalized Matrix Learning Vector Quantization (GMLVQ). Our approach of characterizing connectivity using lsGC followed by GMLVQ for subsequent classification yields good prediction results, with an accuracy of 87% and an area under the ROC curve (AUC) of up to 0.90. We obtain a statistically significant improvement (p<0.01) over a conventional Granger causality approach (accuracy = 0.76, AUC = 0.74). The high accuracy and AUC values obtained with our multivariate approach to connectivity analysis suggest that it is able to better capture changes in interaction patterns between different brain regions when compared to conventional Granger causality analysis known from the literature.
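
    A rough sketch of this kind of pipeline is shown below: pairwise causality statistics become a feature vector per subject, which then feeds a classifier. Conventional bivariate Granger causality (statsmodels) stands in for lsGC, logistic regression stands in for GMLVQ, and all data are simulated.

    ```python
    # Pairwise Granger-causality features per subject, then classification.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def gc_features(ts):                     # ts: (timepoints, regions)
        n = ts.shape[1]
        feats = []
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                # Does region j Granger-cause region i?
                res = grangercausalitytests(ts[:, [i, j]], maxlag=2,
                                            verbose=False)  # silence printout
                feats.append(res[2][0]["ssr_ftest"][0])     # F statistic, lag 2
        return feats

    X = np.array([gc_features(rng.standard_normal((120, 5))) for _ in range(20)])
    y = np.array([0, 1] * 10)                # healthy vs. HAND labels (simulated)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.score(X, y))
    ```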

  18. Multielement comparison of instrumental neutron activation analysis techniques using reference materials

    International Nuclear Information System (INIS)

    Ratner, R.T.; Vernetson, W.G.

    1995-01-01

    Several instrumental neutron activation analysis techniques (parametric, comparative, and k0-standardization) are evaluated using three reference materials. Each technique is applied to the National Institute of Standards and Technology standard reference materials SRM 1577a (Bovine Liver) and SRM 2704 (Buffalo River Sediment), and the United States Geological Survey standard BHVO-1 (Hawaiian Basalt Rock). Identical (but not optimum) irradiation, decay, and counting schemes are employed with each technique to provide a basis for comparison and to determine sensitivities in a routine irradiation scheme. Fifty-one elements are used in this comparison; however, several elements are not detected in the reference materials due to the rigid analytical conditions (e.g., insufficient irradiation length, or the activity of the radioisotope of interest decaying below the lower limit of detection before the counting interval). Most elements are normally distributed around certified or consensus values with a standard deviation of 10%. For some elements, discrepancies are observed and discussed. The accuracy, precision, and sensitivity of each technique are discussed by comparing the analytical results to consensus values for the Hawaiian Basalt Rock to demonstrate the diversity of multielement applications. (author) 4 refs.; 2 tabs
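
    For the comparative technique specifically, the concentration calculation is simple enough to sketch: with sample and standard irradiated and counted under identical conditions, the element concentration scales with decay-corrected specific count rates. All numbers below are illustrative.

    ```python
    # Comparative INAA: concentration from sample vs. standard photopeak counts,
    # both decay-corrected to the end of irradiation.
    import math

    def concentration(counts_s, m_s, t_d_s, counts_std, m_std, t_d_std,
                      c_std, half_life):
        lam = math.log(2) / half_life
        a_s = counts_s * math.exp(lam * t_d_s) / m_s      # specific activity, sample
        a_std = counts_std * math.exp(lam * t_d_std) / m_std
        return c_std * a_s / a_std

    # Illustrative: 59Fe (half-life ~44.5 d), standard at ~4.11e4 mg/kg Fe,
    # decay times in days; all counts and masses are made up.
    print(concentration(15200, 0.25, 5.0, 9800, 0.20, 4.0, 4.11e4, 44.5))
    ```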

  19. Large scale applicability of a Fully Adaptive Non-Intrusive Spectral Projection technique: Sensitivity and uncertainty analysis of a transient

    International Nuclear Information System (INIS)

    Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen; Hagen, Tim van der

    2014-01-01

    Highlights: • Grid and basis adaptive Polynomial Chaos techniques are presented for S and U analysis. • Dimensionality reduction and incremental polynomial order reduce computational costs. • An unprotected loss of flow transient is investigated in a Gas Cooled Fast Reactor. • S and U analysis is performed with MC and adaptive PC methods, for 42 input parameters. • PC accurately estimates means, variances, PDFs, sensitivities and uncertainties. - Abstract: Since the early years of reactor physics, the most prominent sensitivity and uncertainty (S and U) analysis methods in the nuclear community have been adjoint-based techniques. While these are very effective for pure neutronics problems due to the linearity of the transport equation, they become complicated when coupled non-linear systems are involved. With the continuous increase in computational power such complicated multi-physics problems are becoming progressively tractable, hence affordable and easily applicable S and U analysis tools also have to be developed in parallel. For reactor physics problems for which adjoint methods are prohibitive, Polynomial Chaos (PC) techniques offer an attractive alternative to traditional random-sampling-based approaches. At TU Delft such PC methods have been studied for a number of years, and this paper presents a large-scale application of our Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm for performing the sensitivity and uncertainty analysis of a Gas Cooled Fast Reactor (GFR) Unprotected Loss Of Flow (ULOF) transient. The transient was simulated using the Cathare 2 code system and a fully detailed model of the GFR2400 reactor design that was investigated in the European FP7 GoFastR project. Several sources of uncertainty were taken into account, amounting to an unusually high number of stochastic input parameters (42), and numerous output quantities were investigated. The results show consistently good performance of the applied adaptive PC
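
    The core of a non-intrusive spectral projection can be sketched in a few lines for a single standard-normal input: fit Hermite polynomial-chaos coefficients to model samples by least squares, then read the output mean and variance off the coefficients. The grid/basis adaptivity of FANISP is not reproduced here, and the model function is a toy stand-in for a reactor simulation.

    ```python
    # 1-D polynomial chaos with probabilists' Hermite polynomials He_n, which
    # satisfy E[He_m He_n] = n! delta_mn under a standard normal input.
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from math import factorial

    def model(xi):                       # stand-in for an expensive code run
        return np.exp(0.3 * xi) + 0.1 * xi**2

    rng = np.random.default_rng(2)
    xi = rng.standard_normal(400)        # samples of the stochastic input
    order = 6
    coeffs, *_ = np.linalg.lstsq(hermevander(xi, order), model(xi), rcond=None)

    mean = coeffs[0]                                      # since E[He_0] = 1
    var = sum(coeffs[k]**2 * factorial(k) for k in range(1, order + 1))
    print(mean, var)                     # output moments from the PC expansion
    ```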

  20. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, with a view to ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus for examining the overall system effect, and one using a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  1. Percolation Analysis as a Tool to Describe the Topology of the Large Scale Structure of the Universe

    Science.gov (United States)

    Yess, Capp D.

    1997-09-01

    Percolation analysis is the study of the properties of clusters; in cosmology, it is the statistics of the size and number of clusters. This thesis presents a refinement of percolation analysis and its application to astronomical data. An overview of the standard model of the universe and the development of large-scale structure is presented in order to place the study in historical and scientific context. Then, using percolation statistics, we for the first time demonstrate the universal character of a network pattern in the real-space mass distributions resulting from nonlinear gravitational instability of initial Gaussian fluctuations. We also find that the maximum of the number-of-clusters statistic in the evolved, nonlinear distributions is determined by the effective slope of the power spectrum. Next, we present percolation analyses of Wiener Reconstructions of the IRAS 1.2 Jy Redshift Survey. There are ten reconstructions of galaxy density fields in real space spanning the range β = 0.1 to 1.0, where β = Ω^0.6/b, Ω is the present dimensionless density and b is the linear bias factor. Our method uses the growth of the largest-cluster statistic to characterize the topology of a density field, where Gaussian-randomized versions of the reconstructions are used as standards for analysis. For the reconstruction volume of radius R ≈ 100 h^-1 Mpc, percolation analysis reveals a slight 'meatball' topology for the real-space galaxy distribution of the IRAS survey. Finally, we employ a percolation technique developed for pointwise distributions to analyze two-dimensional projections of the three northern and three southern slices in the Las Campanas Redshift Survey, and then give consideration to further study of the methodology, errors and application of percolation. We track the growth of the largest cluster as a topological indicator to a depth of 400 h^-1 Mpc, and report an unambiguous signal, with high signal-to-noise ratio, indicating a network topology which in
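
    The largest-cluster statistic at the heart of this method is straightforward to sketch on a gridded density field: threshold at successive levels and track the fraction of occupied sites absorbed by the largest connected cluster. The smoothed random field below is only a stand-in for a reconstructed galaxy density field.

    ```python
    # Percolation on a 3-D density grid: cluster counts and largest-cluster
    # fraction as the density threshold is lowered.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(3)
    density = ndimage.gaussian_filter(rng.standard_normal((128, 128, 128)), 4)

    for level in np.percentile(density, [99, 95, 90, 80, 50]):
        filled = density > level
        labels, n = ndimage.label(filled)           # connected-component labels
        largest = np.bincount(labels.ravel())[1:].max() if n else 0
        print(f"level={level:.3f}  clusters={n}  "
              f"largest fraction={largest / filled.sum():.3f}")
    ```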

  2. Large-scale expression analysis reveals distinct microRNA profiles at different stages of human neurodevelopment.

    Directory of Open Access Journals (Sweden)

    Brandon Smith

    BACKGROUND: MicroRNAs (miRNAs) are short non-coding RNAs predicted to regulate one third of protein-coding genes via mRNA targeting. In conjunction with key transcription factors, such as the repressor REST (RE1 silencing transcription factor), miRNAs play crucial roles in neurogenesis, which requires a highly orchestrated program of gene expression to ensure the appropriate development and function of diverse neural cell types. Whilst previous studies have highlighted select groups of miRNAs during neural development, there remains a need for amenable models in which miRNA expression and function can be analyzed over the duration of neurogenesis. PRINCIPAL FINDINGS: We performed large-scale expression profiling of miRNAs in human NTera2/D1 (NT2) cells during retinoic acid (RA)-induced transition from progenitors to fully differentiated neural phenotypes. Our results revealed dynamic changes of miRNA patterns, resulting in distinct miRNA subsets that could be linked to specific neurodevelopmental stages. Moreover, the cell-type-specific miRNA subsets were very similar in NT2-derived differentiated cells and human primary neurons and astrocytes. Further analysis identified miRNAs as putative regulators of REST, as well as candidate miRNAs targeted by REST. Finally, we confirmed the existence of two predicted miRNAs, pred-MIR191 and pred-MIR222, associated with SLAIN1 and FOXP2, respectively, and provided some evidence of their potential co-regulation. CONCLUSIONS: In the present study, we demonstrate that regulation of miRNAs occurs in precise patterns indicative of their roles in cell fate commitment, progenitor expansion and differentiation into neurons and glia. Furthermore, the similarity between our NT2 system and primary human cells suggests their roles in molecular pathways critical for human in vivo neurogenesis.

  3. Large-scale lysimeter site St. Arnold, Germany: analysis of 40 years of precipitation, leachate and evapotranspiration

    Directory of Open Access Journals (Sweden)

    N. Harsch

    2009-03-01

    This study deals with a lysimetric-meteorological data series collected at the large-scale lysimeter site "St. Arnold", Germany, from November 1965 to April 2007. The particular value of these data rests both on their long duration and on the fact that the site comprises a grassland basin, an oak/beech basin and a pine basin.

    Apart from analyzing long term trends of the meteorological measurements, the primary objective of this study is to investigate the water balance in grassland and forested basins, in particular comparing the precipitation term to leachate quantities and potential and actual evapotranspiration. The latter are based upon the Penman and the Penman-Monteith approaches, respectively.

    The main results of this survey are that, on a long-term average, the grassland basin turns more than half (53%) of its annually incoming precipitation into leachate and only 36% into water vapour, while the deciduous forest exhibits ratios of 37% for leachate and 56% for evapotranspiration, and the evergreen coniferous forest shows the highest evapotranspiration rate (65%) and the lowest leachate rate (26%).

    Concerning these water balances, considerable differences both between basins and between seasons stand out. While summer periods exhibit high evapotranspiration rates for the forests and moderate ones for the grassland, winter periods are characterised by considerable leachate quantities for grassland and the deciduous forest and moderate ones for the coniferous forest. Following the analysis of the climatic development in St. Arnold, trends towards a milder and more humid regional climate were detected.
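
    The reported long-term balances can be checked with simple arithmetic; the residual of precipitation not accounted for by leachate or evapotranspiration is presumably attributable to storage change and closure error (an assumption, not stated in the abstract).

    ```python
    # Long-term water-balance fractions per basin, from the figures above.
    balances = {                      # basin: (leachate %, evapotranspiration %)
        "grassland": (53, 36),
        "oak/beech": (37, 56),
        "pine": (26, 65),
    }
    for basin, (leach, et) in balances.items():
        print(f"{basin:10s} leachate={leach}%  ET={et}%  "
              f"residual={100 - leach - et}%")
    ```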

  4. Magma viscosity estimation based on analysis of erupted products. Potential assessment for large-scale pyroclastic eruptions

    International Nuclear Information System (INIS)

    Takeuchi, Shingo

    2010-01-01

    After the formulation of guidelines for volcanic hazards in site evaluation for nuclear installations (e.g. JEAG4625-2009), it is necessary to establish appropriate methods to assess the potential for large-scale pyroclastic eruptions at long-dormant volcanoes, one of the most hazardous volcanic phenomena for the safety of the installations. In considering volcanic dormancy, magma eruptability is an important concept. Magma eruptability is dominantly controlled by magma viscosity, which can be estimated from petrological analysis of erupted materials. Therefore, viscosity estimation of magmas erupted in past eruptions should provide important information for assessing future activity at hazardous volcanoes. In order to show the importance of magma viscosity in the concept of magma eruptability, this report overviews dike propagation processes from a magma chamber and the nature of magma viscosity. Magma viscosities at the pre-eruptive conditions of magma chambers were compiled based on previous petrological studies of past eruptions in Japan. There are only 16 examples of eruptions at 9 volcanoes satisfying the data requirements for magma viscosity estimation. Estimated magma viscosities range from 10^2 to 10^7 Pa·s for basaltic to rhyolitic magmas. Most of the examples fall below the dike propagation limit of magma viscosity (ca. 10^6 Pa·s) estimated from a dike propagation model. Magmas more viscous than the dike propagation limit (ca. 10^7 Pa·s) are considered to lose eruptability, the ability to form dikes and initiate eruptions. However, in some cases, small precursory eruptions of less viscous magmas occurred just before climactic eruptions of the highly viscous magmas, suggesting that the precursory dike propagation by the less viscous magmas induced the following eruptions of the highly viscous magmas (ca. 10^7 Pa·s). (author)

  5. Large-scale analysis of acute ethanol exposure in zebrafish development: a critical time window and resilience.

    Directory of Open Access Journals (Sweden)

    Shaukat Ali

    BACKGROUND: In humans, ethanol exposure during pregnancy causes a spectrum of developmental defects (fetal alcohol syndrome, or FAS). Individuals vary in phenotypic expression. Zebrafish embryos develop FAS-like features after ethanol exposure. In this study, we ask whether stage-specific effects of ethanol can be identified in the zebrafish, and if so, whether they allow the pinpointing of sensitive developmental mechanisms. We have therefore conducted the first large-scale (>1500 embryos) analysis of acute, stage-specific drug effects on zebrafish development, with a large panel of readouts. METHODOLOGY/PRINCIPAL FINDINGS: Zebrafish embryos were raised in 96-well plates. Range-finding indicated that 10% ethanol for 1 h was suitable for an acute exposure regime. High-resolution magic-angle spinning proton magnetic resonance spectroscopy showed that this produced a transient pulse of 0.86% ethanol concentration in the embryo within the chorion. Survivors at 5 days postfertilisation were analysed. Phenotypes ranged from normal (resilient) to severely malformed. Ethanol exposure at early stages caused high mortality (≥88%). At later stages of exposure, mortality declined and malformations developed. Pharyngeal arch hypoplasia and behavioral impairment were most common after prim-6 and prim-16 exposure. By contrast, microphthalmia and growth retardation were stage-independent. CONCLUSIONS: Our findings show that some ethanol effects are strongly stage-dependent. The phenotypes mimic key aspects of FAS, including craniofacial abnormality, microphthalmia, growth retardation and behavioral impairment. We also identify a critical time window (prim-6 and prim-16) for ethanol sensitivity. Finally, our identification of a wide phenotypic spectrum is reminiscent of human FAS, and may provide a useful model for studying disease resilience.

  6. The resource curse: Analysis of the applicability to the large-scale export of electricity from renewable resources

    International Nuclear Information System (INIS)

    Eisgruber, Lasse

    2013-01-01

    The “resource curse” has been analyzed extensively in the context of non-renewable resources such as oil and gas. More recently, commentators have expressed concerns that renewable electricity exports can also have adverse economic impacts on exporting countries. My paper analyzes to what extent the resource curse applies in the case of large-scale renewable electricity exports. I develop a “comprehensive model” that integrates previous works and provides a consolidated view of how non-renewable resource abundance impacts economic growth. Deploying this model, I analyze, through case studies on Laos, Mongolia, and the MENA region, to what extent exporters of renewable electricity run the danger of the resource curse. I find that renewable electricity exports avoid some disadvantages of non-renewable resource exports, including (i) shocks after resource depletion; (ii) macroeconomic fluctuations; and (iii) competition for a fixed amount of resources. Nevertheless, renewable electricity exports bear some of the same risks as conventional resource exports, including (i) crowding-out of the manufacturing sector; (ii) incentives for corruption; and (iii) reduced government accountability. I conclude with recommendations for managing such risks. - Highlights: ► Study analyzes whether the resource curse applies to renewable electricity export. ► I develop a “comprehensive model of the resource curse” and use cases for the analysis. ► Renewable electricity export avoids some disadvantages compared to other resources. ► Renewable electricity bears some of the same risks as conventional resources. ► Study concludes with recommendations for managing such risks

  7. Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers

    KAUST Repository

    Sifaou, Houssem; Kammoun, Abla; Sanguinetti, Luca; Debbah, Merouane; Alouini, Mohamed-Slim

    2016-01-01

    This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS) endowed with M antennas communicates with K single-antenna user equipments (UEs). Particularly, we aim at reducing the complexity

  8. A climatological analysis of high-precipitation events in Dronning Maud Land, Antarctica, and associated large-scale atmospheric conditions

    NARCIS (Netherlands)

    Welker, Christoph; Martius, Olivia; Froidevaux, Paul; Reijmer, Carleen H.; Fischer, Hubertus

    2014-01-01

    The link between high precipitation in Dronning Maud Land (DML), Antarctica, and the large-scale atmospheric circulation is investigated using ERA-Interim data for 1979-2009. High-precipitation events are analyzed at Halvfarryggen situated in the coastal region of DML and at Kohnen Station located

  9. Large-Scale Gene-Centric Meta-Analysis across 39 Studies Identifies Type 2 Diabetes Loci

    NARCIS (Netherlands)

    Saxena, Richa; Elbers, Clara C.; Guo, Yiran; Peter, Inga; Gaunt, Tom R.; Mega, Jessica L.; Lanktree, Matthew B.; Tare, Archana; Almoguera Castillo, Berta; Li, Yun R.; Johnson, Toby; Bruinenberg, Marcel; Gilbert-Diamond, Diane; Rajagopalan, Ramakrishnan; Voight, Benjamin F.; Balasubramanyam, Ashok; Barnard, John; Bauer, Florianne; Baumert, Jens; Bhangale, Tushar; Boehm, Bernhard O.; Braund, Peter S.; Burton, Paul R.; Chandrupatla, Hareesh R.; Clarke, Robert; Cooper-DeHoff, Rhonda M.; Crook, Errol D.; Davey-Smith, George; Day, Ian N.; de Boer, Anthonius; de Groot, Mark C. H.; Drenos, Fotios; Ferguson, Jane; Fox, Caroline S.; Furlong, Clement E.; Gibson, Quince; Gieger, Christian; Gilhuijs-Pederson, Lisa A.; Glessner, Joseph T.; Goel, Anuj; Gong, Yan; Grant, Struan F. A.; Kumari, Meena; van der Harst, Pim; van Vliet-Ostaptchouk, Jana V.; Verweij, Niek; Wolffenbuttel, Bruce H. R.; Hofker, Marten H.; Asselbergs, Folkert W.; Wijmenga, Cisca

    2012-01-01

    To identify genetic factors contributing to type 2 diabetes (T2D), we performed large-scale meta-analyses by using a custom ~50,000-SNP genotyping array (the ITMAT-Broad-CARe array) with ~2000 candidate genes in 39 multiethnic population-based studies, case-control studies, and

  10. Development of the Large-Scale Statistical Analysis System of Satellites Observations Data with Grid Datafarm Architecture

    Science.gov (United States)

    Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.

    2006-12-01

    In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data has been increasing every year. Three problems must be solved to achieve large-scale statistical analyses of such data. (i) More CPU power and larger memory and disk sizes are required; the total power of personal computers is not sufficient to analyze such amounts of data, and while supercomputers provide high-performance CPUs and rich memory, they are usually separated from the Internet or connected only for programming or data file transfer. (ii) Most of the observation data files are managed at distributed data sites over the Internet, so users have to know where the data files are located. (iii) Since no common data format is available in the STP field, users have to prepare reading programs for each dataset themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data analysis environment based on the Gfarm reference implementation of the Grid Datafarm architecture. Gfarm shares computational resources and performs parallel distributed processing. In addition, it provides the Gfarm filesystem, which acts as a virtual directory tree among nodes. The Gfarm environment is composed of three parts: a metadata server to manage distributed file information, filesystem nodes to provide computational resources, and a client to submit jobs to the metadata server and manage data processing schedules. In the present study, both data files and data processing are parallelized on the Gfarm with 6 filesystem nodes; each node has a Pentium V 1GHz CPU, 256MB memory and a 40GB disk. To evaluate the performance of the present Gfarm system, we scanned a large number of data files, each about 300MB in size, using three processing methods: sequential processing in one node, sequential processing by each node, and parallel processing by each node. As a result, in comparison between the

  11. Understanding Business Interests in International Large-Scale Student Assessments: A Media Analysis of "The Economist," "Financial Times," and "Wall Street Journal"

    Science.gov (United States)

    Steiner-Khamsi, Gita; Appleton, Margaret; Vellani, Shezleen

    2018-01-01

    The media analysis is situated in the larger body of studies that explore the varied reasons why different policy actors advocate for international large-scale student assessments (ILSAs) and adds to the research on the fast advance of the global education industry. The analysis of "The Economist," "Financial Times," and…

  12. IMPROVED LARGE-SCALE SLOPE ANALYSIS ON MARS BASED ON CORRELATION OF SLOPES DERIVED WITH DIFFERENT BASELINES

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2017-07-01

    The surface slopes of planetary bodies are important factors for exploration missions, for example in landing site selection and rover manoeuvring. Generally, high-resolution digital elevation models (DEMs), such as those generated from HiRISE images of Mars, are preferred for generating detailed slopes with better fidelity to terrain features. Unfortunately, high-resolution datasets normally cover only small areas and are not always available, while lower-resolution datasets, such as MOLA, provide global coverage of the Martian surface. Slopes generated from a low-resolution DEM are based on a large baseline and smoothed relative to the real situation. In order to carry out large-scale slope analysis of the Martian surface based on low-resolution data such as MOLA, while alleviating the smoothing problem due to its low resolution, this paper presents an amplifying function for slopes derived from low-resolution DEMs based on the relationships between DEM resolutions and slopes. First, slope maps are derived from the HiRISE DEM (a meter-level-resolution DEM generated from HiRISE images) and a series of down-sampled HiRISE DEMs. The latter are used to simulate low-resolution DEMs. Then the high-resolution slope map is down-sampled to the same resolution as the slope map from the lower-resolution DEMs, so that a pixel-wise comparison can be conducted. Each pixel on the slope map derived from the lower-resolution DEM can reach the same value as the down-sampled HiRISE slope by multiplying by an amplifying factor. Seven sets of HiRISE images with representative terrain types are used for correlation analysis. It shows that the relationship between the amplifying factors and the original MOLA slopes can be described by an exponential function. Verification using other datasets shows that after applying the proposed amplifying function, the updated slope maps give better representations of slopes on the Martian surface compared with the original
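
    The correction step can be sketched as follows: fit an exponential amplifying factor to paired (coarse slope, HiRISE-derived ratio) samples and apply it to the coarse slope map. The exponential form follows the abstract; the sample values and fitted coefficients are purely illustrative.

    ```python
    # Fit an exponential amplifying function and apply it to coarse slopes.
    import numpy as np
    from scipy.optimize import curve_fit

    mola_slope = np.array([1.0, 2.0, 4.0, 8.0, 12.0])    # degrees (made up)
    amp_factor = np.array([3.1, 2.4, 1.8, 1.4, 1.2])     # observed ratios (made up)

    f = lambda s, a, b: a * np.exp(b * s)
    (a, b), _ = curve_fit(f, mola_slope, amp_factor, p0=(3.0, -0.1))

    corrected = mola_slope * f(mola_slope, a, b)         # amplified slope values
    print(a, b, corrected)
    ```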

  13. Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research.

    Science.gov (United States)

    Bravo, Àlex; Piñero, Janet; Queralt-Rosinach, Núria; Rautschka, Michael; Furlong, Laura I

    2015-02-21

    Current biomedical research needs to leverage and exploit the large amount of information reported in scientific publications. Automated text mining approaches, in particular those aimed at finding relationships between entities, are key for identification of actionable knowledge from free text repositories. We present the BeFree system aimed at identifying relationships between biomedical entities with a special focus on genes and their associated diseases. By exploiting morpho-syntactic information of the text, BeFree is able to identify gene-disease, drug-disease and drug-target associations with state-of-the-art performance. The application of BeFree to real-case scenarios shows its effectiveness in extracting information relevant for translational research. We show the value of the gene-disease associations extracted by BeFree through a number of analyses and integration with other data sources. BeFree succeeds in identifying genes associated to a major cause of morbidity worldwide, depression, which are not present in other public resources. Moreover, large-scale extraction and analysis of gene-disease associations, and integration with current biomedical knowledge, provided interesting insights on the kind of information that can be found in the literature, and raised challenges regarding data prioritization and curation. We found that only a small proportion of the gene-disease associations discovered by using BeFree is collected in expert-curated databases. Thus, there is a pressing need to find alternative strategies to manual curation, in order to review, prioritize and curate text-mining data and incorporate it into domain-specific databases. We present our strategy for data prioritization and discuss its implications for supporting biomedical research and applications. BeFree is a novel text mining system that performs competitively for the identification of gene-disease, drug-disease and drug-target associations. Our analyses show that mining only a

  14. Large-scale educational telecommunications systems for the US: An analysis of educational needs and technological opportunities

    Science.gov (United States)

    Morgan, R. P.; Singh, J. P.; Rothenberg, D.; Robinson, B. E.

    1975-01-01

    The needs to be served, the subsectors in which the system might be used, the technology employed, and the prospects for future utilization of an educational telecommunications delivery system are described and analyzed. Educational subsectors are analyzed with emphasis on the current status and trends within each subsector. Issues which affect future development, and prospects for future use of media, technology, and large-scale electronic delivery within each subsector are included. Information on technology utilization is presented. Educational telecommunications services are identified and grouped into categories: public television and radio, instructional television, computer aided instruction, computer resource sharing, and information resource sharing. Technology based services, their current utilization, and factors which affect future development are stressed. The role of communications satellites in providing these services is discussed. Efforts to analyze and estimate future utilization of large-scale educational telecommunications are summarized. Factors which affect future utilization are identified. Conclusions are presented.

  15. Towards Development of Clustering Applications for Large-Scale Comparative Genotyping and Kinship Analysis Using Y-Short Tandem Repeats.

    Science.gov (United States)

    Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki

    2015-06-01

    Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
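
    The family of methods the k-AMH refines can be illustrated with a minimal k-modes-style clusterer: haplotypes are assigned to the nearest modal haplotype under simple matching distance, and the modes are recomputed locus by locus. This sketch is not the published k-AMH/Nk-AMH algorithm.

    ```python
    # k-modes-style clustering of categorical Y-STR allele profiles.
    import numpy as np

    def kmodes(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        modes = X[rng.choice(len(X), k, replace=False)]       # initial modes
        for _ in range(iters):
            d = (X[:, None, :] != modes[None, :, :]).sum(2)   # mismatch counts
            labels = d.argmin(1)
            for j in range(k):                                # update modal loci
                members = X[labels == j]
                if len(members):
                    modes[j] = [np.bincount(col).argmax() for col in members.T]
        return labels, modes

    X = np.array([[13, 29, 24], [13, 29, 25], [14, 30, 24],
                  [17, 21, 10], [17, 21, 11], [16, 22, 10]])  # toy allele data
    labels, modes = kmodes(X, 2)
    print(labels, modes)
    ```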

  16. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is too coarse to resolve them. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale parts, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, including both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes, which reveal a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit mass is used, defined as E-tilde = 0.5<u_i'^2>, where u_i' represents the three Cartesian components of a mesoscale circulation, the angle brackets are the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes relative to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes
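
    The MKE diagnostic itself is a one-liner given mesoscale-resolving fields; the sketch below (with synthetic horizontal wind components only, omitting the vertical component) computes perturbations about the grid-cell mean and averages half their squares.

    ```python
    # Mean mesoscale kinetic energy per unit mass: E-tilde = 0.5 <u_i'^2>.
    import numpy as np

    rng = np.random.default_rng(4)
    u = rng.standard_normal((64, 64)) + 3.0      # mesoscale-resolving wind field
    v = rng.standard_normal((64, 64))

    def mke(u, v):
        up, vp = u - u.mean(), v - v.mean()      # perturbations about the
        return 0.5 * (up**2 + vp**2).mean()      # large-scale (grid-cell) mean

    print(mke(u, v))                             # E-tilde, per unit mass
    ```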

  17. A Large-Scale Genetic Analysis Reveals a Strong Contribution of the HLA Class II Region to Giant Cell Arteritis Susceptibility

    NARCIS (Netherlands)

    David Carmona, F.; Mackie, Sarah L.; Martin, Jose-Ezequiel; Taylor, John C.; Vaglio, Augusto; Eyre, Stephen; Bossini-Castillo, Lara; Castaneda, Santos; Cid, Maria C.; Hernandez-Rodriguez, Jose; Prieto-Gonzalez, Sergio; Solans, Roser; Ramentol-Sintas, Marc; Francisca Gonzalez-Escribano, M.; Ortiz-Fernandez, Lourdes; Morado, Inmaculada C.; Narvaez, Javier; Miranda-Filloy, Jose A.; Beretta, Lorenzo; Lunardi, Claudio; Cimmino, Marco A.; Gianfreda, Davide; Santilli, Daniele; Ramirez, Giuseppe A.; Soriano, Alessandra; Muratore, Francesco; Pazzola, Giulia; Addimanda, Olga; Wijmenga, Cisca; Witte, Torsten; Schirmer, Jan H.; Moosig, Frank; Schoenau, Verena; Franke, Andre; Palm, Oyvind; Molberg, Oyvind; Diamantopoulos, Andreas P.; Carette, Simon; Cuthbertson, David; Forbess, Lindsy J.; Hoffman, Gary S.; Khalidi, Nader A.; Koening, Curry L.; Langford, Carol A.; McAlear, Carol A.; Moreland, Larry; Monach, Paul A.; Pagnoux, Christian; Seo, Philip; Spiera, Robert; Sreih, Antoine G.; Warrington, Kenneth J.; Ytterberg, Steven R.; Gregersen, Peter K.; Pease, Colin T.; Gough, Andrew; Green, Michael; Hordon, Lesley; Jarrett, Stephen; Watts, Richard; Levy, Sarah; Patel, Yusuf; Kamath, Sanjeet; Dasgupta, Bhaskar; Worthington, Jane; Koeleman, Bobby P. C.; de Bakker, Paul I. W.; Barrett, Jennifer H.; Salvarani, Carlo; Merkel, Peter A.; Gonzalez-Gay, Miguel A.; Morgan, Ann W.; Martin, Javier

    2015-01-01

    We conducted a large-scale genetic analysis on giant cell arteritis (GCA), a polygenic immune-mediated vasculitis. A case-control cohort, comprising 1,651 case subjects with GCA and 15,306 unrelated control subjects from six different countries of European ancestry, was genotyped by the Immunochip

  18. Multilevel Latent Class Analysis for Large-Scale Educational Assessment Data: Exploring the Relation between the Curriculum and Students' Mathematical Strategies

    Science.gov (United States)

    Fagginger Auer, Marije F.; Hickendorff, Marian; Van Putten, Cornelis M.; Béguin, Anton A.; Heiser, Willem J.

    2016-01-01

    A first application of multilevel latent class analysis (MLCA) to educational large-scale assessment data is demonstrated. This statistical technique addresses several of the challenges that assessment data offers. Importantly, MLCA allows modeling of the often ignored teacher effects and of the joint influence of teacher and student variables.…

  19. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors...). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSDHP technology is put into perspective with respect to alternatives, and a short discussion of the barriers and breakthrough of the technology is given.

  20. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to (non-ductile) fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  1. Large-scale, multi-compartment tests in PANDA for LWR-containment analysis and code validation

    International Nuclear Information System (INIS)

    Paladino, Domenico; Auban, Olivier; Zboray, Robert

    2006-01-01

    The large-scale thermal-hydraulic PANDA facility has been used in recent years for investigating passive decay heat removal systems and related containment phenomena relevant for next-generation and current light water reactors. As part of the 5th EURATOM framework program project TEMPEST, a series of tests was performed in PANDA to experimentally investigate the distribution of hydrogen inside the containment and its effect on the performance of the Passive Containment Cooling System (PCCS) designed for the Economic Simplified Boiling Water Reactor (ESBWR). In a postulated severe accident, a large amount of hydrogen could be released in the Reactor Pressure Vessel (RPV) as a consequence of the cladding metal-water (M-W) reaction and discharged together with steam to the Drywell (DW) compartment. In the PANDA tests, hydrogen was simulated with helium. This paper illustrates the results of a TEMPEST test performed in PANDA, named Test T1.2. In Test T1.2, the gas stratification (steam-helium) patterns forming in the large-scale multi-compartment PANDA DW, and the effect of non-condensable gas (helium) on the overall behaviour of the PCCS, were identified. Gas mixing and stratification in a large-scale multi-compartment system are currently being further investigated in PANDA in the frame of the OECD project SETH. The testing philosophy in this new PANDA program is to produce data for code validation in relation to specific phenomena, such as gas stratification in the containment, gas transport between containment compartments, and wall condensation. These types of phenomena are driven by buoyant high-momentum injections (jets) and/or low-momentum injections (plumes), depending on the transient scenario. In this context, the new SETH tests in PANDA are particularly valuable for producing an experimental database for code assessment. This paper also presents an overview of the PANDA SETH tests and the major improvements in instrumentation carried out in the PANDA

  2. Multi-element analysis of crude-oil samples by 14.6 MeV neutron activation

    International Nuclear Information System (INIS)

    Cam, N.F.; Cigeroglu, F.; Erduran, M.N.

    1997-01-01

    The instrumental neutron activation technique, using the SAMES T-400 neutron generator with 14.6 MeV neutrons produced by the 3H(d,n)4He reaction, is demonstrated for multi-element analysis of Saudi-Arabian crude-oil samples. The system parameters for the absolute method (e.g., the counting solid angle, intrinsic efficiency of the γ-ray detector, effective neutron flux, activation cross sections, etc.) were determined, and elemental concentrations are presented with corrections for all possible interferences carefully considered. (author)
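
    The absolute method mentioned above rests on the standard activation equation, sketched below for a generic capture-style activation: solving the photopeak-count expression for the element mass requires every system parameter explicitly. The thermal-capture example and all values are illustrative, not a re-analysis of the 14.6 MeV measurement.

    ```python
    # Absolute-method activation analysis: element mass from photopeak counts.
    import math

    N_A = 6.022e23          # Avogadro's number

    def element_mass(counts, theta, M, sigma_cm2, flux, eff, gamma_yield,
                     half_life, t_irr, t_decay, t_count):
        lam = math.log(2) / half_life
        saturation = 1 - math.exp(-lam * t_irr)      # build-up during irradiation
        decay = math.exp(-lam * t_decay)             # cooling before counting
        counting = (1 - math.exp(-lam * t_count)) / lam
        rate_per_gram = (N_A * theta / M) * sigma_cm2 * flux
        return counts / (rate_per_gram * saturation * decay * counting
                         * eff * gamma_yield)

    # 56Mn from 55Mn(n,gamma): sigma ~13.3 b, T1/2 ~2.58 h (9290 s); times in s.
    print(element_mass(5.0e4, 1.0, 54.94, 13.3e-24, 1e13, 0.02, 0.989,
                       9290, 3600, 1800, 1000))      # mass in grams
    ```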

  3. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order
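
    As a minimal illustration of the projection-based setting these methods operate in, the sketch below builds a POD basis from snapshots via the SVD and projects a state onto it; GNAT's hyper-reduction, the Lagrangian-structure preservation, and the ROMES error surrogate are all beyond this fragment, and the snapshot data are synthetic.

    ```python
    # POD sketch: extract a reduced basis from snapshots and project a state.
    import numpy as np

    rng = np.random.default_rng(6)
    modes_true = rng.standard_normal((1000, 5))               # hidden low rank
    snapshots = modes_true @ rng.standard_normal((5, 50))
    snapshots += 0.01 * rng.standard_normal(snapshots.shape)  # small noise

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1               # keep 99.9% energy
    basis = U[:, :r]                                          # (1000, r) basis

    x = snapshots[:, 0]
    x_r = basis.T @ x                                         # reduced coordinates
    print(r, np.linalg.norm(x - basis @ x_r))                 # small recon. error
    ```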

  4. Multielement analysis of reagents used in chemical identification of transuranic elements

    International Nuclear Information System (INIS)

    Montalvan Estrada, A.; Brigido Flores, O.; Maslov, O.D.; Dmitriev, S.N.

    2006-01-01

    For more than 40 years, chemical identification of transuranic elements has been used at the Laboratory of Nuclear Reactions of the Joint Institute for Nuclear Research, Dubna, Russia, as a secondary method of identification. Chlorination of transuranic elements obtained by nuclear reactions is an important step of the procedure, in order to obtain volatile compounds able to pass through a thermochromatographic process. To assess the quality of the reagents TiCl4 and SOCl2, multielement analysis was carried out using both X-ray fluorescence and gamma activation. The simplest procedure for reagent sample pretreatment was followed, so interferences from other chemical products were avoided. X-ray fluorescence analysis was performed in a spectrometer with a Si(Li) detector with a resolution of 190 eV for Fe (Kα). Both Cd-109 and Am-241 were used as isotopic excitation sources. Gamma activation analysis was carried out using the compact electron accelerator MT-25, where gamma rays are produced in a stopping target. The parameters of the MT-25 include: energy range 10-25 MeV, gamma-ray flux 10^14 photon/s, power consumption 20 kW. Measurements of the induced activity were performed with the help of a HPGe detector and thin and coaxial Ge(Li) detectors. Two elements were identified in SOCl2, nickel (3×10^-6 g/g) and antimony (2×10^-7 g/g), while three elements were identified in TiCl4: zirconium (8×10^-7 g/g), arsenic (9×10^-7 g/g) and antimony (5×10^-7 g/g). Only five elements were detected in trace concentrations in the two analyzed reagents; that is, of the more than 57 elements detectable by gamma activation analysis with the MT-25, only 5 had concentrations above the detection limits of the method. Since they are not chemical analogs of the synthesized transuranic elements (Z = 104 and 106) and are not subject to alpha or fission decay, no interference from them is expected in the chemical

  5. Determination of trace metals in coastal seawater around Okinawa and its multielement profiling analysis

    International Nuclear Information System (INIS)

    Itoh, Akihide; Ishigaki, Teruyuki; Arakaki, Teruo; Yamada, Ayako; Yamaguchi, Mami; Kabe, Noriko

    2009-01-01

    In the present study, trace metals in coastal surface seawater around Okinawa were determined by inductively coupled plasma mass spectrometry (ICP-MS) with chelating-disk preconcentration. Concentrations of V, Mn, Co, Ni, Cu, Zn, Mo, Cd, Pb, and U were obtained in the range from 10 μg L^-1 to 0.001 μg L^-1 for 6 samples. In addition, multielement profiling analyses were carried out using the analytical values obtained, in order to elucidate the features of trace metals in each coastal sea area. For coastal surface seawater near an urban area, the analytical values for Zn, Cu, Mn, and Pb were more than 10-fold higher than literature values for open surface seawater, and those of Cd were also relatively high. This multielement profile was very similar to literature values for coastal seawater of the main island of Japan. On the other hand, the analytical values of most elements for coastal surface seawater near a suburban area were within 0.5- to 5-fold of the literature values for open surface seawater. From multielement profiling analyses of nutrient-type elements in marine chemistry, it was suggested that the concentrations of Zn and Cd in a coral sea area, normalized to literature values for open surface seawater, were higher than those of Ni and Cu. (author)
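
    The profiling step amounts to normalizing measured concentrations by open-ocean reference values so that enrichment stands out; the values below are placeholders, not the paper's data.

    ```python
    # Enrichment factors: measured coastal values over open-ocean references.
    measured = {"Zn": 1.2, "Cu": 0.45, "Mn": 1.8, "Pb": 0.08, "Cd": 0.02}   # ug/L
    open_ocean = {"Zn": 0.10, "Cu": 0.06, "Mn": 0.15, "Pb": 0.005, "Cd": 0.01}

    for el in measured:
        print(el, round(measured[el] / open_ocean[el], 1))   # >10 flags enrichment
    ```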

  6. Static analysis: from theory to practice; Static analysis of large-scale embedded code, generation of abstract domains

    International Nuclear Information System (INIS)

    Monniaux, D.

    2009-06-01

    Software operating critical systems (aircraft, nuclear power plants) should not fail, whereas most computerised systems of daily life (personal computers, ticket vending machines, cell phones) fail from time to time. This is not a simple engineering problem: it has been known since the works of Turing and Cook that proving that programs work correctly is intrinsically hard. In order to solve this problem, one needs methods that are, at the same time, efficient (moderate costs in time and memory), safe (all possible failures should be found), and precise (few warnings about nonexistent failures). In order to reach a satisfactory compromise between these goals, one draws on fields as diverse as formal logic, numerical analysis and 'classical' algorithmics. From 2002 to 2007 I participated in the development of the Astree static analyser, which suggested to me a number of side projects, both theoretical and practical (use of formal proof techniques, analysis of numerical filters...). More recently, I became interested in modular analysis of numerical properties and in the application to program analysis of constraint-solving techniques (semi-definite programming, SAT and SAT modulo theory). (author)

  7. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  8. Multielement suppressor nozzles for thrust augmentation systems.

    Science.gov (United States)

    Lawrence, R. L.; O'Keefe, J. V.; Tate, R. B.

    1972-01-01

    The noise reduction and nozzle performance characteristics of large-scale, high-aspect-ratio multielement nozzle arrays operated at low velocities were determined by test. The nozzles were selected for application to high-aspect-ratio augmentor suppressors to be used on augmentor wing airplanes. Significant improvements in the noise characteristics of multielement nozzles over those of round or high-aspect-ratio slot nozzles were obtained. Elliptical noise patterns typical of slot nozzles are presented for high-aspect-ratio multielement nozzle arrays. Element size and spacing offer additional OASPL noise reductions. Augmentor-suppressor systems can be designed to exploit beam-pattern directivity and frequency-spectrum shaping. Measurements of the nozzle wakes show a correlation with noise level data and frequency spectrum peaks. The noise and jet wake results are compared with existing prediction procedures based on empirical jet flow equations, Lighthill relationships, Strouhal number, and empirical shock-induced screech noise effects.
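    For orientation, the classical scalings underlying the cited prediction procedures can be stated compactly (standard textbook forms; the report's empirical fits will differ in constants):

    ```latex
    \begin{align}
      P_{ac} &\sim K\,\frac{\rho_0\,U_j^{8}\,D^{2}}{c_0^{5}}
        && \text{(Lighthill's eighth-power law for jet mixing noise)}\\
      f_{peak} &= \mathrm{St}\,\frac{U_j}{D}, \qquad \mathrm{St} \approx 0.2
        && \text{(Strouhal scaling of the spectral peak)}
    \end{align}
    ```

    Subdividing a slot of fixed total area into many small elements reduces the characteristic dimension D, shifting the spectral peak to higher frequencies where atmospheric attenuation is stronger, which is one standard rationale for multielement suppressor nozzles.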

  9. Large-scale structure of the Taurus molecular complex. II. Analysis of velocity fluctuations and turbulence. III. Methods for turbulence

    International Nuclear Information System (INIS)

    Kleiner, S.C.; Dickman, R.L.

    1985-01-01

    The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large-scale velocity structure of the Taurus dark cloud complex traced by the present ¹³CO (J = 1-0) observations of this region. The results obtained are discussed in the context of recent suggestions that the widely observed correlations between molecular cloud line widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, involving mapping a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map. 61 references
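    The centroid-ACF method lends itself to a compact numerical sketch: compute the intensity-weighted line centroid at each map position, then the normalised autocorrelation of the centroid fluctuations. The synthetic spectra and the simple biased estimator below are illustrative assumptions, not the paper's exact pipeline.

    ```python
    # Velocity-centroid ACF sketch on synthetic spectra (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    v = np.linspace(-5, 5, 101)                                  # velocity axis, km/s
    # 200 lines of sight, each a Gaussian line with a randomly shifted centre:
    spectra = np.exp(-0.5 * (v - rng.normal(0, 1, (200, 1)))**2)

    # Intensity-weighted line centroid at each position.
    centroids = (spectra * v).sum(axis=1) / spectra.sum(axis=1)

    def acf(x):
        """Biased, normalised autocorrelation of centroid fluctuations."""
        x = x - x.mean()
        r = np.correlate(x, x, mode="full")[x.size - 1:]
        return r / r[0]

    print(acf(centroids)[:5])   # lag-0 value is 1 by construction
    ```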

  10. Large-scale analysis of in Vivo phosphorylated membrane proteins by immobilized metal ion affinity chromatography and mass spectrometry

    DEFF Research Database (Denmark)

    Nühse, Thomas S; Stensballe, Allan; Jensen, Ole N

    2003-01-01

    specificity. We investigated the potential of IMAC in combination with capillary liquid chromatography coupled to tandem mass spectrometry for the identification of plasma membrane phosphoproteins of Arabidopsis. Without chemical modification of peptides, over 75% pure phosphopeptides were isolated from...... plasma membrane digests and detected and sequenced by mass spectrometry. We present a scheme for two-dimensional peptide separation using strong anion exchange chromatography prior to IMAC that both decreases the complexity of IMAC-purified phosphopeptides and yields a far greater coverage...... of monophosphorylated peptides. Among the identified sequences, six originated from different isoforms of the plasma membrane H(+)-ATPase and defined two previously unknown phosphorylation sites at the regulatory C terminus. The potential for large-scale identification of phosphorylation sites on plasma membrane...

  11. Large scale model experimental analysis of concrete containment of nuclear power plant strengthened with externally wrapped carbon fiber sheets

    International Nuclear Information System (INIS)

    Yang Tao; Chen Xiaobing; Yue Qingrui

    2005-01-01

    The concrete containment of a nuclear power station is the last shield structure in case of nuclear leakage during an accident. The experimental model in this paper is a 1/10-scale model of a full-size prestressed reinforced concrete containment. The model containment was loaded by hydraulic pressure simulating the design pressure during an accident. Hundreds of sensors and advanced data-acquisition systems were used in the test. The containment was first loaded to the damage pressure and then strengthened by externally wrapping carbon fiber sheets around the outer surface of the containment structure. Experimental results indicate that the CFRP system can greatly increase the capacity of the concrete containment to endure internal pressure. The CFRP system can also effectively confine the deformation and the cracks caused by loading. (authors)

  12. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h⁻¹ Mpc and R_G = 34 h⁻¹ Mpc. The genus topology studied at the R_G = 21 h⁻¹ Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
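    For reference, the Gaussian random-phase benchmark against which the observed genus curve is judged has a standard closed form (textbook expression, not the paper's single-parameter fitting formula):

    ```latex
    % Genus per unit volume of a Gaussian random field smoothed on scale R_G,
    % as a function of the density threshold \nu (in units of \sigma):
    \begin{equation}
      g(\nu) \;=\; A\,\bigl(1-\nu^{2}\bigr)\,e^{-\nu^{2}/2},
      \qquad
      A \;=\; \frac{1}{(2\pi)^{2}}\left(\frac{\langle k^{2}\rangle}{3}\right)^{3/2},
    \end{equation}
    % where <k^2> is the power-spectrum-weighted mean square wavenumber of
    % the smoothed field. Shifts and asymmetries of the measured curve
    % relative to this form are what the fitting formula quantifies.
    ```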

  13. Informational and emotional elements in online support groups: a Bayesian approach to large-scale content analysis.

    Science.gov (United States)

    Deetjen, Ulrike; Powell, John A

    2016-05-01

    This research examines the extent to which informational and emotional elements are employed in online support forums for 14 purposively sampled chronic medical conditions, and the factors that influence whether posts are of a more informational or emotional nature. Large-scale qualitative data were obtained from Dailystrength.org. Based on a hand-coded training dataset, all posts were classified as informational or emotional using a Bayesian classification algorithm to generalize the findings. Posts that could not be classified with a probability of at least 75% were excluded. The overall tendency toward emotional posts differs by condition: mental health (depression, schizophrenia) and Alzheimer's disease forums consist of more emotional posts, while informational posts relate more to nonterminal physical conditions (irritable bowel syndrome, diabetes, asthma). There is no gender difference across conditions, although prostate cancer forums are oriented toward informational support, whereas breast cancer forums feature more emotional support. Across diseases, the best predictors of emotional content are lower age and a higher number of overall posts by the support group member. The results are in line with previous empirical research and unify empirical findings from single- and two-condition studies. Limitations include the analytical restriction to predefined categories (informational, emotional) imposed by the chosen machine-learning approach. Our findings provide an empirical foundation for building theory on informational versus emotional support across conditions, give practitioners insights to better understand the role of online support groups for different patients, and show the usefulness of machine-learning approaches for analyzing large-scale qualitative health data from online settings.
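    The classification step described above can be sketched in a few lines: train a naive Bayes text classifier on hand-coded posts, then keep only predictions made with at least 75% posterior probability, mirroring the paper's exclusion rule. The toy posts and the scikit-learn pipeline are assumptions for illustration, not the study's implementation.

    ```python
    # Naive Bayes informational/emotional classifier with a 75% cutoff (sketch).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_posts = ["what dosage worked for you", "I feel so alone tonight",
                   "any side effects with this drug", "sending you hugs and strength"]
    train_labels = ["informational", "emotional", "informational", "emotional"]

    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(train_posts, train_labels)

    for post in ["does insulin interact with this", "thinking of you all"]:
        probs = clf.predict_proba([post])[0]
        if probs.max() >= 0.75:                       # the paper's exclusion threshold
            print(post, "->", clf.classes_[probs.argmax()])
        else:
            print(post, "-> excluded (ambiguous)")
    ```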

  14. Comparison and Evaluation of Large-Scale and On-Site Recycling Systems for Food Waste via Life Cycle Cost Analysis

    Directory of Open Access Journals (Sweden)

    Kyoung Hee Lee

    2017-11-01

    Full Text Available The purpose of this study was to evaluate the cost-benefit of an on-site food waste recycling system using life-cycle cost analysis, and to compare it with a large-scale treatment system. For accurate evaluation, the cost-benefit analysis was conducted from the perspectives of both local governments and residents, and qualitative environmental improvement effects were quantified. For the local governments, the analysis showed that replacing the large-scale treatment system with the on-site recycling system yielded significant cost reductions from the initial stage, owing to lower investment, maintenance, and food-wastewater treatment costs. For the residents, the cost of using the on-site recycling system was larger than that of the large-scale treatment system because of the cost of producing and installing the on-site treatment facilities at the initial stage. However, the analysis showed that with continuous benefits such as greenhouse gas emission reduction, compost utilization, and food wastewater reduction, cost savings would be obtained after 6 years of operating the on-site recycling system. Local governments and residents should therefore consider introducing an on-site food waste recycling system when replacing an old treatment system or establishing a new one.

  15. Large-scale use of mosquito larval source management for malaria control in Africa: a cost analysis.

    Science.gov (United States)

    Worrall, Eve; Fillinger, Ulrike

    2011-11-08

    At present, two malaria vector control methods, long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS), are being scaled up in Africa with substantial funding from donors. A third vector control method, larval source management (LSM), has historically been very successful and is today widely used for mosquito control globally, except in Africa. With the increasing risk of insecticide resistance and a shift to more exophilic vectors, LSM is now under re-evaluation for use against afro-tropical vector species. Here the costs of this intervention were evaluated. The 'ingredients approach' was used to estimate the economic and financial costs per person protected per year (pppy) for large-scale LSM using microbial larvicides in three ecologically diverse settings: (1) the coastal metropolitan area of Dar es Salaam in Tanzania, (2) a highly populated Kenyan highland area (Vihiga District), and (3) a lakeside setting in rural western Kenya (Mbita Division). Two scenarios were examined to investigate the cost implications of using alternative product formulations. Sensitivity analyses on product prices were carried out. The results show that, for programmes using the same granular formulation, larviciding costs the least pppy in Dar es Salaam (US$0.94), approximately 60% more in Vihiga District (US$1.50), and the most in Mbita Division (US$2.50). However, these costs are reduced substantially if an alternative water-dispersible formulation is used; in Vihiga District, this would reduce costs to US$0.79 and, in Mbita Division, to US$1.94. Larvicide and staff salary costs each accounted for approximately a third of the total economic costs per year. The cost pppy depends mainly on: (1) the type of formulation required for treating different aquatic habitats, (2) the human population density relative to the density of aquatic habitats, and (3) the potential to target the intervention in space and/or time. Costs for LSM compare favourably with costs for IRS
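    The 'ingredients approach' arithmetic reduces to annualising capital items, adding recurrent costs, and dividing by the population protected; the sketch below uses a standard annuity factor and placeholder figures, not the study's data.

    ```python
    # Cost per person protected per year (pppy), ingredients-approach sketch.
    def annualised(capital_cost, useful_life_yrs, discount_rate=0.03):
        """Equivalent annual cost of a capital item (standard annuity factor)."""
        r, n = discount_rate, useful_life_yrs
        return capital_cost * r / (1 - (1 + r) ** -n)

    larvicide = 40_000                      # per year; formulation-dependent
    salaries = 45_000                       # per year
    equipment = annualised(25_000, useful_life_yrs=5)

    population_protected = 120_000
    cost_pppy = (larvicide + salaries + equipment) / population_protected
    print(f"US${cost_pppy:.2f} per person protected per year")
    ```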

  16. Large-scale use of mosquito larval source management for malaria control in Africa: a cost analysis

    Science.gov (United States)

    2011-01-01

    Background At present, two malaria vector control methods, long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS), are being scaled up in Africa with substantial funding from donors. A third vector control method, larval source management (LSM), has historically been very successful and is today widely used for mosquito control globally, except in Africa. With the increasing risk of insecticide resistance and a shift to more exophilic vectors, LSM is now under re-evaluation for use against afro-tropical vector species. Here the costs of this intervention were evaluated. Methods The 'ingredients approach' was used to estimate the economic and financial costs per person protected per year (pppy) for large-scale LSM using microbial larvicides in three ecologically diverse settings: (1) the coastal metropolitan area of Dar es Salaam in Tanzania, (2) a highly populated Kenyan highland area (Vihiga District), and (3) a lakeside setting in rural western Kenya (Mbita Division). Two scenarios were examined to investigate the cost implications of using alternative product formulations. Sensitivity analyses on product prices were carried out. Results The results show that, for programmes using the same granular formulation, larviciding costs the least pppy in Dar es Salaam (US$0.94), approximately 60% more in Vihiga District (US$1.50), and the most in Mbita Division (US$2.50). However, these costs are reduced substantially if an alternative water-dispersible formulation is used; in Vihiga, this would reduce costs to US$0.79 and, in Mbita Division, to US$1.94. Larvicide and staff salary costs each accounted for approximately a third of the total economic costs per year. The cost pppy depends mainly on: (1) the type of formulation required for treating different aquatic habitats, (2) the human population density relative to the density of aquatic habitats and (3) the potential to target the intervention in space and/or time. Conclusion Costs for LSM

  17. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and the SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate Q1 and Q2 in analyses/reanalyses, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observations specified, we compare SCM results with observations and find that models have large biases in cloud properties which could not be fully explained by the uncertainty from the large-scale forcing
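    The Q1 and Q2 diagnostics mapped here follow the conventional apparent-heat-source and apparent-moisture-sink definitions; a sketch of the standard (Yanai-type) forms, which the study's exact discretization may refine:

    ```latex
    \begin{align}
      Q_1 &= \frac{\partial \bar{s}}{\partial t}
            + \bar{\mathbf{V}}\cdot\nabla \bar{s}
            + \bar{\omega}\,\frac{\partial \bar{s}}{\partial p},
      \qquad \bar{s} = c_p \bar{T} + g\bar{z},\\
      Q_2 &= -L\left(\frac{\partial \bar{q}}{\partial t}
            + \bar{\mathbf{V}}\cdot\nabla \bar{q}
            + \bar{\omega}\,\frac{\partial \bar{q}}{\partial p}\right),
    \end{align}
    % where s is the dry static energy, q the water vapour mixing ratio,
    % omega the pressure vertical velocity and L the latent heat; overbars
    % denote the large-scale (grid) average supplied by the 3DCVA analysis.
    ```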

  18. Creation of the dam for the No. 2 Kambaratinskaya HPP by large-scale blasting: analysis of planning experience and lessons learned

    International Nuclear Information System (INIS)

    Shuifer, M. I.; Argal, É. S.

    2012-01-01

    Results of complex instrumental observations and videotaping during large-scale blasts (LSB) detonated for creation of the dam at the No. 2 Kambaratinskaya HPP on the Naryn River in the Kyrgyz Republic are analyzed. The energy effectiveness of the explosives is evaluated, the characteristics of LSB manifestations in seismic and air waves are revealed, and the shaping and movement of the rock mass are examined. A methodological analysis of the planning and production of the LSB is given.

  19. NPP planning based on analysis of ground vibration caused by collapse of large-scale cooling towers

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Feng; Ji, Hongkui [Department of Structural Engineering, Tongji University, No. 1239 Siping Road, Shanghai 200092 (China); Gu, Xianglin, E-mail: gxl@tongji.edu.cn [Department of Structural Engineering, Tongji University, No. 1239 Siping Road, Shanghai 200092 (China); Li, Yi [Department of Structural Engineering, Tongji University, No. 1239 Siping Road, Shanghai 200092 (China); Wang, Mingreng; Lin, Tao [East China Electric Power Design Institute Co., Ltd, No. 409 Wuning Road, Shanghai 200063 (China)

    2015-12-15

    Highlights: • New recommendations for NPP planning were proposed, taking collapse-induced ground vibration into account. • Critical factors influencing the collapse-induced ground vibration were investigated. • A comprehensive approach was presented to describe the initiation and propagation of the collapse-induced disaster. Abstract: Ground vibration induced by the collapse of large-scale cooling towers can detrimentally influence the safe operation of adjacent nuclear-related facilities. To prevent and mitigate these hazards, new planning methods for nuclear power plants (NPPs) were studied considering the influence of these hazards. First, a "cooling tower-soil" model was developed, verified, and used as a numerical means to investigate ground vibration. Afterwards, five critical factors influencing collapse-induced ground vibration were analyzed in depth: the height and weight of the towers, accidental loads, soil properties, overlying soil, and isolation trenches. Finally, recommendations for the control and mitigation of collapse-induced ground vibration in NPP planning were proposed, addressing five issues: appropriate spacing between a cooling tower and the nuclear island, control of collapse modes, siting of a cooling tower and the nuclear island, application of vibration reduction techniques, and the influence of tower collapse on the surroundings.

  20. The potential for reducing atmospheric carbon by large-scale afforestation in China and related cost/benefit analysis

    International Nuclear Information System (INIS)

    Deying Xu

    1995-01-01

    In this paper, the amount of carbon sequestered through large-scale afforestation and the related costs and benefits are calculated, assuming that the forests are managed in perpetual rotations. Based on land availability for afforestation, 20 cases are identified in five suitable regions in China. From the standpoint of initial investment, Pinus massoniana is the least expensive option for sequestering carbon, followed by spruce. Open forest management cases are relatively inexpensive options because of their low initial investment and long rotations, although their annual wood increments are low. Some less productive tree species have higher net costs for carbon sequestering. For most of the agroforestry systems the net costs are low, especially in the south, the southwest, and the north of China, though their initial investments are high. If all the available land is afforested, net carbon sequestration will be about 9.7 billion tons under perpetual rotations, 16.3 times the total industrial carbon release in China in 1988, and the total initial cost of such a programme is estimated at 19.3 billion US$. Some hindrances to developing forests in China are discussed. (Author)

  1. Design and analysis of drum lathe for manufacturing large-scale optical microstructured surface and load characteristics of aerostatic spindle

    Science.gov (United States)

    Wu, Dongxu; Qiao, Zheng; Wang, Bo; Wang, Huiming; Li, Guo

    2014-08-01

    In this paper, a four-axis ultra-precision lathe for machining large-scale drum moulds with microstructured surfaces is presented. First, because of the large dimensions and weight of the drum workpiece, as well as the high machining accuracy required, the design guidelines and component parts of this drum lathe are introduced in detail, including the control system, moving and driving components, position feedback system and so on. Additionally, since the weight of the drum workpiece results in structural deformation of the lathe, this paper analyses the effect of structural deformation on machining accuracy by means of ANSYS. The position change is approximately 16.9 nm in the X-direction (the sensitive direction), which is negligible. Finally, in order to study the impact of bearing parameters on the load characteristics of the aerostatic journal bearing, the computational fluid dynamics (CFD) package FLUENT is adopted and a series of simulations are carried out. The results show that the aerostatic spindle has superior load capacity and stiffness; it is possible for this lathe to bear a drum workpiece weighing up to 1000 kg, since there are two aerostatic spindles, one each in the headstock and tailstock.

  2. [Evaluation and source analysis of the mercury pollution in soils and vegetables around a large-scale zinc smelting plant].

    Science.gov (United States)

    Liu, Fang; Wang, Shu-Xiao; Wu, Qing-Ru; Lin, Hai

    2013-02-01

    Farming soil and vegetable samples around a large-scale zinc smelter were collected for mercury content analysis, and the single pollution index method with relevant regulations was used to evaluate the pollution status of the sampled soils and vegetables. The results indicated that the surface soil and vegetables were polluted with mercury to different extents. Of the soil samples, 78% exceeded the national standard. The mercury concentration in the most severely contaminated area was 29 times the background concentration, reaching the severe pollution degree. The mercury concentration in all vegetable samples exceeded the standard for non-polluted vegetables. The mercury concentration in the most severely polluted vegetables was 64.5 times the standard, and on average the vegetable samples were 25.4 times the standard. For 85% of the vegetable samples, the mercury concentration of the leaves was significantly higher than that of the roots, which implies that the mercury in leaves mainly came from the atmosphere. The mercury concentrations in vegetable roots were significantly correlated with those in soils, indicating that the mercury in roots was mainly from soil. The mercury emissions from the zinc smelter have obvious impacts on the surrounding soils and vegetables. Keywords: zinc smelting; mercury pollution; soil; vegetable; mercury content
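    The single pollution index used above is simply the ratio of the measured concentration to the evaluation standard; the grading cut-offs in this sketch are common illustrative assumptions, not necessarily the paper's exact scheme.

    ```python
    # Single pollution index P = C_measured / C_standard (sketch).
    def single_pollution_index(c_measured, c_standard):
        return c_measured / c_standard

    def grade(p):
        # Illustrative grading thresholds (assumed, not from the paper).
        if p <= 1: return "unpolluted"
        if p <= 2: return "slight"
        if p <= 3: return "moderate"
        return "severe"

    hg_soil, hg_standard = 8.7, 0.30    # mg/kg, placeholder values
    p = single_pollution_index(hg_soil, hg_standard)
    print(f"P = {p:.1f} -> {grade(p)}")
    ```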

  3. Multielement analysis of human hair and kidney stones by instrumental neutron activation analysis with the k0-standardization method

    International Nuclear Information System (INIS)

    Abugassa, I.; Sarmani, S.B.; Samat, S.B.

    1999-01-01

    This paper focuses on the evaluation of the k0 method of instrumental neutron activation analysis for biological materials. The method was applied to multielement analysis of human hair standard reference materials from the IAEA (No. 085, No. 086) and from NIES (National Institute for Environmental Sciences, No. 5). Hair samples from people resident in different parts of Malaysia, in addition to a sample from Japan, were analyzed. In addition, human kidney stones from members of the Malaysian population were analyzed for minor and trace elements. More than 25 elements were determined. The samples were irradiated in the rotary rack (Lazy Susan) of the TRIGA Mark II reactor of the Malaysian Institute for Nuclear Technology and Research (MINT). The accuracy of the method was ascertained by analysis of other reference materials, including SRM 1573 (tomato leaves) and SRM 1572 (citrus leaves). In this method, the deviation (α) of the 1/E^(1+α) epithermal neutron flux distribution from the 1/E law, the peak-to-total (P/T) ratio for true-coincidence effects of γ-ray cascades, and the HPGe detector efficiency were determined and corrected for.
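    For context, the working relation of k0 standardization is commonly written as follows (standard textbook form with Au as the comparator; the paper's exact notation may differ), which shows where the α, P/T and efficiency corrections enter:

    ```latex
    \begin{equation}
      c_a \;=\; \frac{A_{sp,a}}{A_{sp,Au}}
        \cdot\frac{1}{k_{0,Au}(a)}
        \cdot\frac{f + Q_{0,Au}(\alpha)}{f + Q_{0,a}(\alpha)}
        \cdot\frac{\varepsilon_{p,Au}}{\varepsilon_{p,a}},
      \qquad
      A_{sp} = \frac{N_p / t_m}{S\,D\,C\,w},
    \end{equation}
    % where f is the thermal-to-epithermal flux ratio, Q_0(alpha) the
    % resonance-integral-to-thermal-cross-section ratio corrected for the
    % 1/E^{1+alpha} epithermal shape, eps_p the full-energy-peak efficiency
    % (itself corrected via the P/T ratio for true-coincidence effects),
    % and S, D, C the saturation, decay and counting factors.
    ```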

  4. Simultaneous multi-element determination in different seed samples of Dodonaea viscosa hopseed using instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    El-Sweify, Fatma H.; El-Amir, Mahmoud A.; Mostafa, Mohamed; Ramadan, Hala E.; Rashad, Ghada M.

    2016-01-01

    The instrumental neutron activation analysis (INAA) technique was applied for nondestructive multi-element analysis of seed samples of the plant Dodonaea viscosa (hopseed). This plant is distributed all over Egypt because of its suitable properties. The samples were collected from bushes grown at different sites in several governorates, in July of each year from 2004 to 2011. The elements determined, under the chosen irradiation and cooling times, were Co, Cs, Eu, Fe, Hg, Ni, Rb, Sc, Se, Sr and Zn. The content of some elements was compared with data obtained from previous work on the analysis of various kinds of seeds. The influence of some parameters on the determined elemental content is discussed. Standard reference materials IAEA-155 and IAEA-V-10 were used to assure the quality control, accuracy and precision of the technique.

  5. Simultaneous multi-element determination in different seed samples of Dodonaea viscosa hopseed using instrumental neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    El-Sweify, Fatma H.; El-Amir, Mahmoud A.; Mostafa, Mohamed; Ramadan, Hala E.; Rashad, Ghada M. [Atomic Energy Authority, Cairo (Egypt). Hot Lab. Center

    2016-07-01

    The instrumental neutron activation analysis (INAA) technique was applied for nondestructive multi-element analysis of seed samples of the plant Dodonaea viscosa (hopseed). This plant is distributed all over Egypt because of its suitable properties. The samples were collected from bushes grown at different sites in several governorates, in July of each year from 2004 to 2011. The elements determined, under the chosen irradiation and cooling times, were Co, Cs, Eu, Fe, Hg, Ni, Rb, Sc, Se, Sr and Zn. The content of some elements was compared with data obtained from previous work on the analysis of various kinds of seeds. The influence of some parameters on the determined elemental content is discussed. Standard reference materials IAEA-155 and IAEA-V-10 were used to assure the quality control, accuracy and precision of the technique.

  6. The state of OA: a large-scale analysis of the prevalence and impact of Open Access articles.

    Science.gov (United States)

    Piwowar, Heather; Priem, Jason; Larivière, Vincent; Alperin, Juan Pablo; Matthias, Lisa; Norlander, Bree; Farley, Ashley; West, Jevin; Haustein, Stefanie

    2018-01-01

    Despite growing interest in Open Access (OA) to scholarly literature, there is an unmet need for large-scale, up-to-date, and reproducible studies assessing the prevalence and characteristics of OA. We address this need using oaDOI, an open online service that determines OA status for 67 million articles. We use three samples, each of 100,000 articles, to investigate OA in three populations: (1) all journal articles assigned a Crossref DOI, (2) recent journal articles indexed in Web of Science, and (3) articles viewed by users of Unpaywall, an open-source browser extension that lets users find OA articles using oaDOI. We estimate that at least 28% of the scholarly literature is OA (19M in total) and that this proportion is growing, driven particularly by growth in Gold and Hybrid. The most recent year analyzed (2015) also has the highest percentage of OA (45%). Because of this growth, and the fact that readers disproportionately access newer articles, we find that Unpaywall users encounter OA quite frequently: 47% of articles they view are OA. Notably, the most common mechanism for OA is not Gold, Green, or Hybrid OA, but rather an under-discussed category we dub Bronze: articles made free-to-read on the publisher website, without an explicit Open license. We also examine the citation impact of OA articles, corroborating the so-called open-access citation advantage: accounting for age and discipline, OA articles receive 18% more citations than average, an effect driven primarily by Green and Hybrid OA. We encourage further research using the free oaDOI service, as a way to inform OA policy and practice.

  7. The state of OA: a large-scale analysis of the prevalence and impact of Open Access articles

    Directory of Open Access Journals (Sweden)

    Heather Piwowar

    2018-02-01

    Full Text Available Despite growing interest in Open Access (OA) to scholarly literature, there is an unmet need for large-scale, up-to-date, and reproducible studies assessing the prevalence and characteristics of OA. We address this need using oaDOI, an open online service that determines OA status for 67 million articles. We use three samples, each of 100,000 articles, to investigate OA in three populations: (1) all journal articles assigned a Crossref DOI, (2) recent journal articles indexed in Web of Science, and (3) articles viewed by users of Unpaywall, an open-source browser extension that lets users find OA articles using oaDOI. We estimate that at least 28% of the scholarly literature is OA (19M in total) and that this proportion is growing, driven particularly by growth in Gold and Hybrid. The most recent year analyzed (2015) also has the highest percentage of OA (45%). Because of this growth, and the fact that readers disproportionately access newer articles, we find that Unpaywall users encounter OA quite frequently: 47% of articles they view are OA. Notably, the most common mechanism for OA is not Gold, Green, or Hybrid OA, but rather an under-discussed category we dub Bronze: articles made free-to-read on the publisher website, without an explicit Open license. We also examine the citation impact of OA articles, corroborating the so-called open-access citation advantage: accounting for age and discipline, OA articles receive 18% more citations than average, an effect driven primarily by Green and Hybrid OA. We encourage further research using the free oaDOI service, as a way to inform OA policy and practice.

  8. A Novel Large-scale Mentoring Program for Medical Students based on a Quantitative and Qualitative Needs Analysis

    Science.gov (United States)

    von der Borch, Philip; Dimitriadis, Konstantinos; Störmann, Sylvère; Meinel, Felix G.; Moder, Stefan; Reincke, Martin; Tekian, Ara; Fischer, Martin R.

    2011-01-01

    Purpose: Mentoring plays an important role in students' performance and career. The authors of this study assessed the need for mentoring among medical students and established a novel large-scale mentoring program at Ludwig-Maximilians-University (LMU) Munich School of Medicine. Methods: Needs assessment was conducted using a survey distributed to all students at the medical school (n=578 of 4,109 students, return rate 14.1%). In addition, the authors held focus groups with selected medical students (n=24) and faculty physicians (n=22). All students signing up for the individual mentoring completed a survey addressing their expectations (n=534). Results: Needs assessment revealed that 83% of medical students expressed overall satisfaction with the teaching at LMU. In contrast, only 36.5% were satisfied with how the faculty supports their individual professional development and 86% of students voiced a desire for more personal and professional support. When asked to define the role of a mentor, 55.6% "very much" wanted their mentors to act as counselors, arrange contacts for them (36.4%), and provide ideas for professional development (28.1%). Topics that future mentees "very much" wished to discuss included research (56.6%), final year electives (55.8%) and experiences abroad (45.5%). Conclusions: Based on the strong desire for mentoring among medical students, the authors developed a novel two-tiered system that introduces one-to-one mentoring for students in their clinical years and offers society-based peer mentoring for pre-clinical students. One year after launching the program, more than 300 clinical students had experienced one-to-one mentoring and 1,503 students and physicians were involved in peer mentoring societies. PMID:21818236

  9. Diffusion Experiments with Opalinus and Callovo-Oxfordian Clays: Laboratory, Large-Scale Experiments and Microscale Analysis by RBS

    International Nuclear Information System (INIS)

    Garcia-Gutierrez, M.; Alonso, U.; Missana, T.; Cormenzana, J.L.; Mingarro, M.; Morejon, J.; Gil, P.

    2009-01-01

    Consolidated clays are potential host rocks for deep geological repositories for high-level radioactive waste. Diffusion is the main transport process for radionuclides (RN) in these clays, and RN diffusion coefficients are the most important parameters for performance assessment (PA) calculations of clay barriers. Different diffusion methodologies were applied at the laboratory scale to analyse the diffusion behaviour of a wide range of RN. The main aims were to understand the transport properties of different RNs in two different clays and to contribute feasible methodologies, using samples of larger scale, to improve in-situ diffusion experiments. Classical laboratory assays and a novel experimental set-up for large-scale diffusion experiments were performed, together with a novel application of the nuclear ion beam technique Rutherford Backscattering Spectrometry (RBS) for diffusion analyses at the micrometre scale. The main experimental and theoretical characteristics of the different methodologies, and their advantages and limitations, are discussed here. Experiments were performed with the Opalinus and the Callovo-Oxfordian clays. Both clays are studied as potential repository host rocks. Effective diffusion coefficients ranged between 1×10⁻¹⁰ and 1×10⁻¹² m²/s for neutral species, low-sorbing cations (such as Na and Sr) and anions. Apparent diffusion coefficients for strongly sorbing elements, such as Cs and Co, are on the order of 1×10⁻¹³ m²/s; europium presents the lowest diffusion coefficient (5×10⁻¹⁵ m²/s). The results obtained by the different approaches gave a comprehensive database of diffusion coefficients for RN with different transport behaviour within both clays. (Author) 42 refs
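    The gap between the effective and apparent coefficients quoted above follows from the standard porous-media relation (assuming linear sorption):

    ```latex
    \begin{equation}
      D_a \;=\; \frac{D_e}{\alpha} \;=\; \frac{D_e}{\varepsilon + \rho_b K_d},
    \end{equation}
    % where eps is the diffusion-accessible porosity, rho_b the bulk dry
    % density and K_d the sorption distribution coefficient. Strong sorption
    % (large K_d, e.g. Cs, Co, Eu) is what drives the apparent coefficients
    % orders of magnitude below those of the non-sorbing tracers.
    ```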

  10. Diffusion Experiments with Opalinus and Callovo-Oxfordian Clays: Laboratory, Large-Scale Experiments and Microscale Analysis by RBS

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Gutierrez, M.; Alonso, U.; Missana, T.; Cormenzana, J.L.; Mingarro, M.; Morejon, J.; Gil, P.

    2009-09-25

    Consolidated clays are potential host rocks for deep geological repositories for high-level radioactive waste. Diffusion is the main transport process for radionuclides (RN) in these clays, and RN diffusion coefficients are the most important parameters for performance assessment (PA) calculations of clay barriers. Different diffusion methodologies were applied at the laboratory scale to analyse the diffusion behaviour of a wide range of RN. The main aims were to understand the transport properties of different RNs in two different clays and to contribute feasible methodologies, using samples of larger scale, to improve in-situ diffusion experiments. Classical laboratory assays and a novel experimental set-up for large-scale diffusion experiments were performed, together with a novel application of the nuclear ion beam technique Rutherford Backscattering Spectrometry (RBS) for diffusion analyses at the micrometre scale. The main experimental and theoretical characteristics of the different methodologies, and their advantages and limitations, are discussed here. Experiments were performed with the Opalinus and the Callovo-Oxfordian clays. Both clays are studied as potential repository host rocks. Effective diffusion coefficients ranged between 1×10⁻¹⁰ and 1×10⁻¹² m²/s for neutral species, low-sorbing cations (such as Na and Sr) and anions. Apparent diffusion coefficients for strongly sorbing elements, such as Cs and Co, are on the order of 1×10⁻¹³ m²/s; europium presents the lowest diffusion coefficient (5×10⁻¹⁵ m²/s). The results obtained by the different approaches gave a comprehensive database of diffusion coefficients for RN with different transport behaviour within both clays. (Author) 42 refs.

  11. Feasibility analysis of using inverse modeling for estimating natural groundwater recharge from a large-scale soil moisture monitoring network

    Science.gov (United States)

    Wang, Tiejun; Franz, Trenton E.; Yue, Weifeng; Szilagyi, Jozsef; Zlotnik, Vitaly A.; You, Jinsheng; Chen, Xunhong; Shulski, Martha D.; Young, Aaron

    2016-02-01

    Despite the importance of groundwater recharge (GR), its accurate estimation remains one of the most challenging tasks in the field of hydrology. In this study, with the help of inverse modeling, long-term (6-year) soil moisture data at 34 sites from the Automated Weather Data Network (AWDN) were used to estimate the spatial distribution of GR across Nebraska, USA, where significant spatial variability exists in soil properties and precipitation (P). To ensure the generality of this study and its potential broad applications, data from public domains and the literature were used to parameterize the standard Hydrus-1D model. Although observed soil moisture differed significantly across the AWDN sites, mainly due to variations in P and soil properties, the simulations were able to capture the dynamics of observed soil moisture under different climatic and soil conditions. The inferred mean annual GR from the calibrated models varied over three orders of magnitude across the study area. To assess the uncertainties of the approach, estimates of GR and actual evapotranspiration (ETa) from the calibrated models were compared to the GR and ETa obtained from other techniques in the study area (e.g., remote sensing, tracers, and regional water balance). The comparison clearly demonstrated the feasibility of inverse modeling and large-scale (>10⁴ km²) soil moisture monitoring networks for estimating GR. In addition, the model results were used to further examine the impacts of climate and soil on GR. The data showed that both P and soil properties had significant impacts on GR in the study area, with coarser soils generating higher GR; however, different relationships between GR and P emerged at the AWDN sites, defined by local climatic and soil conditions. In general, positive correlations existed between annual GR and P for the sites with coarser-textured soils or under wetter climatic conditions. With the rapidly expanding soil moisture monitoring networks around the
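    As a much-simplified stand-in for the idea of inferring recharge from a soil-water model, the sketch below runs a single-bucket water balance in which drainage below the root zone is taken as potential recharge. Hydrus-1D solves the Richards equation instead; this toy model, with invented parameters and synthetic forcing, only illustrates the daily bookkeeping.

    ```python
    # Single-bucket soil-water balance; drainage is a proxy for recharge.
    import numpy as np

    def bucket_recharge(precip, et, capacity_mm=150.0):
        """Daily balance; returns total drainage below the root zone (mm)."""
        storage, recharge = capacity_mm / 2, 0.0
        for p, e in zip(precip, et):
            storage += p - e
            if storage > capacity_mm:            # excess drains downward
                recharge += storage - capacity_mm
                storage = capacity_mm
            storage = max(storage, 0.0)          # no negative storage
        return recharge

    rng = np.random.default_rng(1)
    p = rng.gamma(0.3, 6.0, 365)                 # synthetic daily precipitation, mm
    et = np.full(365, 1.5)                       # synthetic daily ET, mm
    print(f"annual recharge ~ {bucket_recharge(p, et):.0f} mm")
    ```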

  12. Simultaneous multielement analysis of zirconium alloys by chlorination separation of matrix/ICP-AES

    International Nuclear Information System (INIS)

    Kato, Kaneharu

    1990-01-01

    An analytical method combining chlorination separation of the matrix with ICP-AES has been developed for reactor-grade Zr alloys (Zircaloy-2). A sample (1 g) is placed in a Pt boat and chlorinated with HCl gas at 100 ml/min in a glass reaction tube at ca. 330 °C. The matrix Zr of the sample is volatilized and separated as ZrCl₄. The analyte elements, which remain quantitatively in the chlorination residue, are dissolved in a mixture of mineral acids (3 ml of 6 M HCl + 0.5 ml conc. HNO₃ + 0.2 ml conc. H₂SO₄) and, after filtration, diluted to 20 ml with distilled water. ICP-AES was used for simultaneous multielement determination with a calibration curve method. The present method has the following advantages: a simple sample preparation procedure; applicability to samples of any form for multielement determination; and a simple ICP-AES calibration procedure. The method was successfully applied to the determination of Fe, Ni, Cu, Co, Mn and Pb in the Zr alloys of JAERI CRMs and NBS SRMs. (author)
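    The calibration curve step amounts to a linear fit of emission intensity against standard concentrations, inverted for the unknown; the intensities and concentrations below are invented for illustration, and the back-calculation uses the 1 g sample / 20 ml dilution quoted above.

    ```python
    # Linear calibration-curve determination, ICP-AES style (sketch).
    import numpy as np

    std_conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])               # ug/ml standards
    std_intensity = np.array([1.2, 55.0, 110.3, 221.0, 548.9])   # counts

    slope, intercept = np.polyfit(std_conc, std_intensity, 1)

    sample_intensity = 160.4
    sample_conc = (sample_intensity - intercept) / slope
    print(f"analyte in solution: {sample_conc:.2f} ug/ml")

    # 1 g of alloy dissolved and diluted to 20 ml, so mass fraction in ug/g:
    print(f"analyte in alloy: {sample_conc * 20:.0f} ug/g")
    ```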

  13. Performance analysis of a large-scale helium Brayton cryo-refrigerator with static gas bearing turboexpander

    International Nuclear Information System (INIS)

    Zhang, Yu; Li, Qiang; Wu, Jihao; Li, Qing; Lu, Wenhai; Xiong, Lianyou; Liu, Liqiang; Xu, Xiangdong; Sun, Lijia; Sun, Yu; Xie, Xiujuan; Wang, Bingming; Qiu, Yinan; Zhang, Peng

    2015-01-01

    Highlights: • A 2 kW at 20.0 K helium Brayton cryo-refrigerator has been built in China. • A series of tests was systematically conducted to investigate the performance of the cryo-refrigerator. • The maximum heat conductance proportion (90.7%) appears in the heat exchangers of the cold box rather than in those of the heat reservoirs. • A model of the helium Brayton cryo-refrigerator/cycle is presented based on finite-time thermodynamics. Abstract: Large-scale helium cryo-refrigerators are widely used in superconducting systems, nuclear fusion engineering, and scientific research; however, their energy efficiency is quite low. First, a 2 kW at 20.0 K helium Brayton cryo-refrigerator was built, and a series of tests was systematically conducted to investigate its performance. It was found that the maximum heat conductance proportion (90.7%) appears in the heat exchangers of the cold box rather than in those of the heat reservoirs, which is the main characteristic distinguishing the helium Brayton cryo-refrigerator/cycle from the air Brayton refrigerator/cycle. Three other characteristics lie in the refrigerant helium bypass configuration, the internal purifier, and the nonlinearity of the specific heat of helium. Second, a model of the helium Brayton cryo-refrigerator/cycle is presented based on finite-time thermodynamics. An assumption named the internal purification temperature depth (PTD) is introduced, and the heat capacity rate of the whole cycle is divided into three regions in accordance with the PTD: the room temperature region, the upper internal-purification-temperature region, and the lower one. Analytical expressions for the cooling capacity and COP are obtained, and we found that the expressions are piecewise functions. Further, comparison between the model and the experimental results for the cooling capacity of the helium cryo-refrigerator shows an error of less than 7.6%. The PTD not only helps to achieve the analytical formulae but also indicates the working
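    One reason the efficiency is intrinsically low at this temperature is visible from a one-line bound (simple arithmetic, independent of the paper's finite-time model):

    ```latex
    \begin{equation}
      \mathrm{COP}_{Carnot} \;=\; \frac{T_c}{T_h - T_c}
      \;=\; \frac{20}{300 - 20} \;\approx\; 0.071,
    \end{equation}
    % i.e. even an ideal machine needs roughly 14 W of input work per watt
    % of cooling at 20 K against a 300 K ambient; a real Brayton machine
    % delivering 2 kW at 20.0 K consumes far more, which is why the heat
    % exchanger conductance analysis above matters.
    ```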

  14. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ('the Task') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large-scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large-scale solar purchasing amongst potential large-scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large-scale active solar heating purchasing activity within the UK. (author)

  15. Multi-Element Analysis and Geochemical Spatial Trends of Groundwater in Rural Northern New York

    Directory of Open Access Journals (Sweden)

    Michael O’Connor

    2010-05-01

    Full Text Available Samples from private wells (n = 169) throughout St. Lawrence County, NY were analyzed by ICP-MS multi-element techniques. St. Lawrence County spans three diverse bedrock terranes, including Precambrian crystalline rocks of the Adirondack Lowlands (mostly paragneisses) and Highlands (mostly orthogneisses), as well as Paleozoic sedimentary rocks of the St. Lawrence Valley. An ArcGIS database was constructed and used to generate contour plots for elements across the county. Strontium isotopes and unique geochemical signatures were used to distinguish water from various geologic units. The results were consistent with a large (7,309 km²), sparsely populated (~110,000 residents), rural region with diverse bedrock and glacial cover.

  16. GAT: a graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks.

    Science.gov (United States)

    Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R

    2012-01-01

    In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory for analyzing the topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing the differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating the differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched controls (CON). The results revealed an alteration in the small-world characteristics of the brain networks in the ALL survivors, an observation that confirms our hypothesis suggesting widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.
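    The kind of metrics GAT compares can be sketched in a few lines of Python (not GAT itself): threshold a correlation matrix at a given cost (edge density), build a binary graph, and compute global and local efficiency. The toy data and the networkx calls are assumptions for illustration.

    ```python
    # Cost-thresholded correlation network and efficiency metrics (sketch).
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(42)
    corr = np.corrcoef(rng.normal(size=(90, 200)))   # 90 regions, toy data
    np.fill_diagonal(corr, 0)

    cost = 0.15                                      # keep the top 15% of edges
    n = corr.shape[0]
    k = int(cost * n * (n - 1) / 2)                  # number of edges to keep
    thresh = np.sort(corr[np.triu_indices(n, 1)])[-k]

    G = nx.from_numpy_array((corr >= thresh).astype(int))
    print("global efficiency:", nx.global_efficiency(G))
    print("local efficiency:", nx.local_efficiency(G))
    ```

    Sweeping `cost` over a range and integrating each metric (the AUC approach mentioned above) is what makes the group comparison less sensitive to any single threshold choice.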

  17. GAT: a graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks.

    Directory of Open Access Journals (Sweden)

    S M Hadi Hosseini

    Full Text Available In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory for analyzing the topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing the differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating the differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched controls (CON). The results revealed an alteration in the small-world characteristics of the brain networks in the ALL survivors, an observation that confirms our hypothesis suggesting widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.

  18. Large-Scale Network Analysis of Whole-Brain Resting-State Functional Connectivity in Spinal Cord Injury: A Comparative Study.

    Science.gov (United States)

    Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar

    2017-09-01

    Network analysis based on graph theory depicts the brain as a complex network, allowing inspection of overall brain connectivity patterns and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize the modular reorganization of the whole brain into constituent nodes and compare network metrics between SCI and control subjects, fifteen subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed, followed by parcellation of the brain into 116 regions of interest (ROIs). Correlation analysis was performed between every ROI pair to construct connectivity matrices, and the ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional networks of the SCI and control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and composition of constituent nodes. LE demonstrated a statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis for studying complex brain networks. Comparing modules across groups revealed differences in the number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.

  19. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  20. The application of two-step linear temperature program to thermal analysis for monitoring the lipid induction of Nostoc sp. KNUA003 in large scale cultivation.

    Science.gov (United States)

    Kang, Bongmun; Yoon, Ho-Sung

    2015-02-01

    Recently, microalgae have been considered as a renewable energy source for fuel production because their production is nonseasonal and may take place on nonarable land. Despite all of these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algal productivity and compositional analysis, especially the total lipid content. Thus, there is considerable interest in the accurate determination of total lipid content during the biotechnological process. For these reasons, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without pretreatment. However, these methods are difficult to apply to wet-form microalgae obtained from large-scale production. In the present study, thermal analysis with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C for Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of wet Nostoc cells cultivated in a raceway. As a result, a linear relationship was determined between the HE value and the total lipid content of Nostoc sp. KNUA003. In particular, there was a 98% linear correlation between the HE value and the total lipid content of the tested microorganism. Based on this relationship, the total lipid content converted from the heat evolved of wet Nostoc sp. KNUA003 can be used for monitoring its lipid induction in large-scale cultivation.
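    The final calibration step described above reduces to a linear regression of total lipid content on the heat evolved (HE) in the 310-351 °C window, after which HE alone can be used for monitoring. The values below are invented for illustration; only the 98% correlation figure comes from the abstract.

    ```python
    # Regress lipid content on heat evolved (HE), then predict from HE alone.
    import numpy as np
    from scipy.stats import linregress

    he = np.array([120., 155., 190., 240., 280.])      # J/g, heat evolved (toy)
    lipid = np.array([8.1, 10.9, 13.2, 17.0, 19.8])    # % dry weight (toy)

    fit = linregress(he, lipid)
    print(f"R^2 = {fit.rvalue**2:.2f}")                # ~0.98 reported in the paper

    new_he = 210.0                                     # HE from a new wet sample
    print(f"predicted lipid: {fit.slope * new_he + fit.intercept:.1f} %")
    ```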

  1. A Development of Nonstationary Regional Frequency Analysis Model with Large-scale Climate Information: Its Application to Korean Watershed

    Science.gov (United States)

    Kim, Jin-Young; Kwon, Hyun-Han; Kim, Hung-Soo

    2015-04-01

    Existing regional frequency analysis has the disadvantage that geographical characteristics are difficult to take into account when estimating areal rainfall. In this regard, this study aims to develop a hierarchical-Bayesian-model-based nonstationary regional frequency analysis in which spatial patterns of the design rainfall are explicitly tied to geographical information (e.g., latitude, longitude and altitude). This study assumes that the parameters of the Gumbel (or GEV) distribution are a function of geographical characteristics within a general linear regression framework. Posterior distributions of the regression parameters are estimated by the Bayesian Markov chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the distributions, using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han River watershed. It was found that the proposed Bayesian regional frequency analysis model gave results similar to those of L-moment-based regional frequency analysis. In addition, the model has advantages in quantifying the uncertainty of the design rainfall and in estimating areal rainfall with geographical information taken into account. Finally, a comprehensive discussion of design rainfall in the nonstationary context is presented. Keywords: regional frequency analysis; nonstationarity; spatial information; Bayesian. Acknowledgement: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
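    The model structure described above can be sketched schematically as follows (covariates taken from the abstract; the study's exact link functions and priors may differ):

    ```latex
    \begin{align}
      y(s) &\sim \mathrm{Gumbel}\bigl(\mu(s),\,\sigma(s)\bigr),\\
      \mu(s) &= \beta_0 + \beta_1\,\mathrm{lat}(s) + \beta_2\,\mathrm{lon}(s)
              + \beta_3\,\mathrm{alt}(s),\\
      \log \sigma(s) &= \gamma_0 + \gamma_1\,\mathrm{lat}(s)
              + \gamma_2\,\mathrm{lon}(s) + \gamma_3\,\mathrm{alt}(s),
    \end{align}
    % with priors on (beta, gamma) sampled by MCMC; DEM grids then supply
    % lat/lon/alt at unmonitored locations to interpolate mu(s), sigma(s),
    % and hence design-rainfall quantiles, over the whole watershed.
    ```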

  2. On the use of Cloud Computing and Machine Learning for Large-Scale SAR Science Data Processing and Quality Assessment Analysis

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Geodetic imaging is revolutionizing geophysics, but the scope of discovery has been limited by the labor-intensive technological implementation of the analyses. The Advanced Rapid Imaging and Analysis (ARIA) project has proven the capability to automate SAR data processing and analysis. Existing and upcoming SAR missions such as Sentinel-1A/B and NISAR are also expected to generate massive amounts of SAR data. This has brought to the forefront the need for analytical tools for SAR quality assessment (QA) on large volumes of SAR data, a critical step before higher-level time series and velocity products can be reliably generated. Initially leveraging an advanced hybrid-cloud computing science data system for large-scale processing, we augmented machine learning approaches for automated analysis of various quality metrics. Machine-learning-based user training of features, cross-validation, and prediction models were integrated into our cloud-based science data processing flow to enable large-scale, high-throughput QA analytics for improving the production quality of geodetic data products.

  3. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-09-30

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running it can make progress, whereas the whole MPI analysis job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  4. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    International Nuclear Information System (INIS)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-01-01

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running it can make progress, whereas the whole MPI analysis job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  5. Large-scale meta-analysis of genome-wide association data identifies six new risk loci for Parkinson's disease

    NARCIS (Netherlands)

    Nalls, Mike A.; Pankratz, Nathan; Lill, Christina M.; Do, Chuong B.; Hernandez, Dena G.; Saad, Mohamad; DeStefano, Anita L.; Kara, Eleanna; Bras, Jose; Sharma, Manu; Schulte, Claudia; Keller, Margaux F.; Arepalli, Sampath; Letson, Christopher; Edsall, Connor; Stefansson, Hreinn; Liu, Xinmin; Pliner, Hannah; Lee, Joseph H.; Cheng, Rong; Ikram, M. Arfan; Ioannidis, John P. A.; Hadjigeorgiou, Georgios M.; Bis, Joshua C.; Martinez, Maria; Perlmutter, Joel S.; Goate, Alison; Marder, Karen; Fiske, Brian; Sutherland, Margaret; Xiromerisiou, Georgia; Myers, Richard H.; Clark, Lorraine N.; Stefansson, Kari; Hardy, John A.; Heutink, Peter; Chen, Honglei; Wood, Nicholas W.; Houlden, Henry; Payami, Haydeh; Brice, Alexis; Scott, William K.; Gasser, Thomas; Bertram, Lars; Eriksson, Nicholas; Foroud, Tatiana; Singleton, Andrew B.; Plagnol, Vincent; Sheerin, Una-Marie; Simón-Sánchez, Javier; Lesage, Suzanne; Sveinbjörnsdóttir, Sigurlaug; Barker, Roger; Ben-Shlomo, Yoav; Berendse, Henk W.; Berg, Daniela; Bhatia, Kailash; de Bie, Rob M. A.; Biffi, Alessandro; Bloem, Bas; Bochdanovits, Zoltan; Bonin, Michael; Bras, Jose M.; Brockmann, Kathrin; Brooks, Janet; Burn, David J.; Charlesworth, Gavin; Chinnery, Patrick F.; Chong, Sean; Clarke, Carl E.; Cookson, Mark R.; Cooper, J. Mark; Corvol, Jean Christophe; Counsell, Carl; Damier, Philippe; Dartigues, Jean-François; Deloukas, Panos; Deuschl, Günther; Dexter, David T.; van Dijk, Karin D.; Dillman, Allissa; Durif, Frank; Dürr, Alexandra; Edkins, Sarah; Evans, Jonathan R.; Foltynie, Thomas; Dong, Jing; Gardner, Michelle; Gibbs, J. Raphael; Gray, Emma; Guerreiro, Rita; Harris, Clare; van Hilten, Jacobus J.; Hofman, Albert; Hollenbeck, Albert; Holton, Janice; Hu, Michele; Huang, Xuemei; Wurster, Isabel; Mätzler, Walter; Hudson, Gavin; Hunt, Sarah E.; Huttenlocher, Johanna; Illig, Thomas; Jónsson, Pálmi V.; Lambert, Jean-Charles; Langford, Cordelia; Lees, Andrew; Lichtner, Peter; Limousin, Patricia; Lopez, Grisel; Lorenz, Delia; McNeill, Alisdair; Moorby, Catriona; Moore, Matthew; Morris, Huw R.; Morrison, Karen E.; Mudanohwo, Ese; O'Sullivan, Sean S.; Pearson, Justin; Pétursson, Hjörvar; Pollak, Pierre; Post, Bart; Potter, Simon; Ravina, Bernard; Revesz, Tamas; Riess, Olaf; Rivadeneira, Fernando; Rizzu, Patrizia; Ryten, Mina; Sawcer, Stephen; Schapira, Anthony; Scheffer, Hans; Shaw, Karen; Shoulson, Ira; Sidransky, Ellen; Smith, Colin; Spencer, Chris C. 
A.; Stefánsson, Hreinn; Bettella, Francesco; Stockton, Joanna D.; Strange, Amy; Talbot, Kevin; Tanner, Carlie M.; Tashakkori-Ghanbaria, Avazeh; Tison, François; Trabzuni, Daniah; Traynor, Bryan J.; Uitterlinden, André G.; Velseboer, Daan; Vidailhet, Marie; Walker, Robert; van de Warrenburg, Bart; Wickremaratchi, Mirdhu; Williams, Nigel; Williams-Gray, Caroline H.; Winder-Rhodes, Sophie; Stefánsson, Kári; Hardy, John; Factor, S.; Higgins, D.; Evans, S.; Shill, H.; Stacy, M.; Danielson, J.; Marlor, L.; Williamson, K.; Jankovic, J.; Hunter, C.; Simon, D.; Ryan, P.; Scollins, L.; Saunders-Pullman, R.; Boyar, K.; Costan-Toth, C.; Ohmann, E.; Sudarsky, L.; Joubert, C.; Friedman, J.; Chou, K.; Fernandez, H.; Lannon, M.; Galvez-Jimenez, N.; Podichetty, A.; Thompson, K.; Lewitt, P.; Deangelis, M.; O'Brien, C.; Seeberger, L.; Dingmann, C.; Judd, D.; Marder, K.; Fraser, J.; Harris, J.; Bertoni, J.; Peterson, C.; Rezak, M.; Medalle, G.; Chouinard, S.; Panisset, M.; Hall, J.; Poiffaut, H.; Calabrese, V.; Roberge, P.; Wojcieszek, J.; Belden, J.; Jennings, D.; Marek, K.; Mendick, S.; Reich, S.; Dunlop, B.; Jog, M.; Horn, C.; Uitti, R.; Turk, M.; Ajax, T.; Mannetter, J.; Sethi, K.; Carpenter, J.; Dill, B.; Hatch, L.; Ligon, K.; Narayan, S.; Blindauer, K.; Abou-Samra, K.; Petit, J.; Elmer, L.; Aiken, E.; Davis, K.; Schell, C.; Wilson, S.; Velickovic, M.; Koller, W.; Phipps, S.; Feigin, A.; Gordon, M.; Hamann, J.; Licari, E.; Marotta-Kollarus, M.; Shannon, B.; Winnick, R.; Simuni, T.; Videnovic, A.; Kaczmarek, A.; Williams, K.; Wolff, M.; Rao, J.; Cook, M.; Fernandez, M.; Kostyk, S.; Hubble, J.; Campbell, A.; Reider, C.; Seward, A.; Camicioli, R.; Carter, J.; Nutt, J.; Andrews, P.; Morehouse, S.; Stone, C.; Mendis, T.; Grimes, D.; Alcorn-Costa, C.; Gray, P.; Haas, K.; Vendette, J.; Sutton, J.; Hutchinson, B.; Young, J.; Rajput, A.; Klassen, L.; Shirley, T.; Manyam, B.; Simpson, P.; Whetteckey, J.; Wulbrecht, B.; Truong, D.; Pathak, M.; Frei, K.; Luong, N.; Tra, T.; Tran, A.; Vo, J.; Lang, A.; Kleiner- Fisman, G.; Nieves, A.; Johnston, L.; So, J.; Podskalny, G.; Giffin, L.; Atchison, P.; Allen, C.; Martin, W.; Wieler, M.; Suchowersky, O.; Furtado, S.; Klimek, M.; Hermanowicz, N.; Niswonger, S.; Shults, C.; Fontaine, D.; Aminoff, M.; Christine, C.; Diminno, M.; Hevezi, J.; Dalvi, A.; Kang, U.; Richman, J.; Uy, S.; Sahay, A.; Gartner, M.; Schwieterman, D.; Hall, D.; Leehey, M.; Culver, S.; Derian, T.; Demarcaida, T.; Thurlow, S.; Rodnitzky, R.; Dobson, J.; Lyons, K.; Pahwa, R.; Gales, T.; Thomas, S.; Shulman, L.; Weiner, W.; Dustin, K.; Singer, C.; Zelaya, L.; Tuite, P.; Hagen, V.; Rolandelli, S.; Schacherer, R.; Kosowicz, J.; Gordon, P.; Werner, J.; Serrano, C.; Roque, S.; Kurlan, R.; Berry, D.; Gardiner, I.; Hauser, R.; Sanchez-Ramos, J.; Zesiewicz, T.; Delgado, H.; Price, K.; Rodriguez, P.; Wolfrath, S.; Pfeiffer, R.; Davis, L.; Pfeiffer, B.; Dewey, R.; Hayward, B.; Johnson, A.; Meacham, M.; Estes, B.; Walker, F.; Hunt, V.; O'Neill, C.; Racette, B.; Swisher, L.; Dijamco, Cheri; Conley, Emily Drabant; Dorfman, Elizabeth; Tung, Joyce Y.; Hinds, David A.; Mountain, Joanna L.; Wojcicki, Anne; Lew, M.; Klein, C.; Golbe, L.; Growdon, J.; Wooten, G. F.; Watts, R.; Guttman, M.; Goldwurm, S.; Saint-Hilaire, M. H.; Baker, K.; Litvan, I.; Nicholson, G.; Nance, M.; Drasby, E.; Isaacson, S.; Burn, D.; Pramstaller, P.; Al-hinti, J.; Moller, A.; Sherman, S.; Roxburgh, R.; Slevin, J.; Perlmutter, J.; Mark, M. 
H.; Huggins, N.; Pezzoli, G.; Massood, T.; Itin, I.; Corbett, A.; Chinnery, P.; Ostergaard, K.; Snow, B.; Cambi, F.; Kay, D.; Samii, A.; Agarwal, P.; Roberts, J. W.; Higgins, D. S.; Molho, Eric; Rosen, Ami; Montimurro, J.; Martinez, E.; Griffith, A.; Kusel, V.; Yearout, D.; Zabetian, C.; Clark, L. N.; Liu, X.; Lee, J. H.; Taub, R. Cheng; Louis, E. D.; Cote, L. J.; Waters, C.; Ford, B.; Fahn, S.; Vance, Jeffery M.; Beecham, Gary W.; Martin, Eden R.; Nuytemans, Karen; Pericak-Vance, Margaret A.; Haines, Jonathan L.; DeStefano, Anita; Seshadri, Sudha; Choi, Seung Hoan; Frank, Samuel; Psaty, Bruce M.; Rice, Kenneth; Longstreth, W. T.; Ton, Thanh G. N.; Jain, Samay; van Duijn, Cornelia M.; Verlinden, Vincent J.; Koudstaal, Peter J.; Singleton, Andrew; Cookson, Mark; Hernandez, Dena; Nalls, Michael; Zonderman, Alan; Ferrucci, Luigi; Johnson, Robert; Longo, Dan; O'Brien, Richard; Traynor, Bryan; Troncoso, Juan; van der Brug, Marcel; Zielke, Ronald; Weale, Michael; Ramasamy, Adaikalavan; Dardiotis, Efthimios; Tsimourtou, Vana; Spanaki, Cleanthe; Plaitakis, Andreas; Bozi, Maria; Stefanis, Leonidas; Vassilatis, Dimitris; Koutsis, Georgios; Panas, Marios; Lunnon, Katie; Lupton, Michelle; Powell, John; Parkkinen, Laura; Ansorge, Olaf

    2014-01-01

    We conducted a meta-analysis of Parkinson's disease genome-wide association studies using a common set of 7,893,274 variants across 13,708 cases and 95,282 controls. Twenty-six loci were identified as having genome-wide significant association; these and 6 additional previously reported loci were ...

  6. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse, unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial ... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on WiFi ...

  7. Analytical methods for large-scale sensitivity analysis using GRESS [GRadient Enhanced Software System] and ADGEN [Automated Adjoint Generator]

    International Nuclear Information System (INIS)

    Pin, F.G.

    1988-04-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and ADGEN now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed. 7 refs., 2 figs.
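
    GRESS and ADGEN operate by instrumenting FORTRAN source code, but the underlying idea, propagating derivatives through a model alongside its values, can be illustrated with a tiny forward-mode dual-number class. The sketch below is illustrative only and is not the GRESS/ADGEN mechanism.

        class Dual:
            """Minimal forward-mode automatic differentiation via dual numbers."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def model(k):
            # Toy performance-assessment model: response as a function of parameter k.
            return 3.0 * k * k + 2.0 * k + 1.0

        # Seed the derivative of k with 1 to carry d(response)/dk along with the value.
        k = Dual(2.0, 1.0)
        out = model(k)
        print(out.val, out.der)   # value 17.0, sensitivity dR/dk = 14.0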

  8. Assessing chlorinated ethene degradation in a large scale contaminant plume by dual carbon–chlorine isotope analysis and quantitative PCR

    DEFF Research Database (Denmark)

    Hunkeler, D.; Abe, Y.; Broholm, Mette Martina

    2011-01-01

    The fate of chlorinated ethenes in a large contaminant plume originating from a tetrachloroethene (PCE) source in a sandy aquifer in Denmark was investigated using novel methods including compound-specific carbon and chlorine isotope analysis and quantitative real-time polymerase chain reaction (qPCR) ... reduction by pyrite, as indicated by the formation of cDCE and stable carbon isotope data. TCE and cDCE showed carbon isotope trends typical for reductive dechlorination, with an initial depletion of 13C in the daughter products followed by an enrichment of 13C as degradation proceeded. At 1000 m downgradient ... cDCE. The significant enrichment of 13C in VC indicates that VC was transformed further, although the mechanism could not be determined. The transformation of cDCE was the rate-limiting step, as no accumulation of VC occurred. In summary, the study demonstrates that carbon-chlorine isotope analysis and qPCR combined with ...

  9. Laminar and dorsoventral molecular organization of the medial entorhinal cortex revealed by large-scale anatomical analysis of gene expression.

    Directory of Open Access Journals (Sweden)

    Helen L Ramsden

    2015-01-01

    Neural circuits in the medial entorhinal cortex (MEC) encode an animal's position and orientation in space. Within the MEC, spatial representations, including grid and directional firing fields, have a laminar and dorsoventral organization that corresponds to a similar topography of neuronal connectivity and cellular properties. Yet, in part due to the challenges of integrating anatomical data at the resolution of cortical layers and borders, we know little about the molecular components underlying this organization. To address this, we develop a new computational pipeline for high-throughput analysis and comparison of in situ hybridization (ISH) images at laminar resolution. We apply this pipeline to ISH data for over 16,000 genes in the Allen Brain Atlas and validate our analysis with RNA sequencing of MEC tissue from adult mice. We find that differential gene expression delineates the borders of the MEC with neighboring brain structures and reveals its laminar and dorsoventral organization. We propose a new molecular basis for distinguishing the deep layers of the MEC and show that their similarity to corresponding layers of neocortex is greater than that of superficial layers. Our analysis identifies ion channel-, cell adhesion- and synapse-related genes as candidates for functional differentiation of MEC layers and for encoding of spatial information at different scales along the dorsoventral axis of the MEC. We also reveal laminar organization of genes related to disease pathology and suggest that a high metabolic demand predisposes layer II to neurodegenerative pathology. In principle, our computational pipeline can be applied to high-throughput analysis of many forms of neuroanatomical data. Our results support the hypothesis that differences in gene expression contribute to functional specialization of superficial layers of the MEC and dorsoventral organization of the scale of spatial representations.

  10. Large-scale association analysis provides insights into the genetic architecture and pathophysiology of type 2 diabetes

    DEFF Research Database (Denmark)

    Morris, Andrew P; Voight, Benjamin F; Teslovich, Tanya M

    2012-01-01

    To extend understanding of the genetic architecture and molecular basis of type 2 diabetes (T2D), we conducted a meta-analysis of genetic variants on the Metabochip, including 34,840 cases and 114,981 controls, overwhelmingly of European descent. We identified ten previously unreported T2D susceptibility loci, including two demonstrating sex-differentiated association. ... Exploration of the enlarged set of susceptibility loci implicates several processes, including CREBBP-related transcription, adipocytokine signaling and cell cycle regulation, in diabetes pathogenesis.

  11. A Large-Scale Analysis of Genetic Variants within Putative miRNA Binding Sites in Prostate Cancer

    DEFF Research Database (Denmark)

    Stegeman, Shane; Amankwah, Ernest; Klein, Kerenaftali

    2015-01-01

    Prostate cancer is the second most common malignancy among men worldwide. Genome-wide association studies have identified 100 risk variants for prostate cancer, which can explain approximately 33% of the familial risk of the disease. We hypothesized that a comprehensive analysis of genetic variants ... currently accepted statistical levels of genome-wide significance. Studies of miRNAs and their interactions with SNPs could provide further insights into the mechanisms of prostate cancer risk.

  12. Analysis, Design and Implementation of a Proof-of-Concept Prototype to Support Large-Scale Military Experimentation

    Science.gov (United States)

    2013-09-01

    Result Analysis: In this phase, users and analysts check all the results per objective-question. Then, they consolidate all these results to form ... the CRUD technique. By using both the CRUD and the user goal techniques, we identified all the use cases the iFRE system must perform. Table 3 ... corresponding Focus Area or Critical Operation Issue to simplify the user tasks, and exempts the user from remembering the identifying codes/numbers of ...

  13. Robust and rapid algorithms facilitate large-scale whole genome sequencing downstream analysis in an integrative framework.

    Science.gov (United States)

    Li, Miaoxin; Li, Jiang; Li, Mulin Jun; Pan, Zhicheng; Hsu, Jacob Shujui; Liu, Dajiang J; Zhan, Xiaowei; Wang, Junwen; Song, Youqiang; Sham, Pak Chung

    2017-05-19

    Whole genome sequencing (WGS) is a promising strategy to unravel variants or genes responsible for human diseases and traits. However, there is a lack of robust platforms for a comprehensive downstream analysis. In the present study, we first proposed three novel algorithms, sequence gap-filled gene feature annotation, bit-block encoded genotypes and sectional fast access to text lines, to address three fundamental problems. The three algorithms then formed the infrastructure of a robust parallel computing framework, KGGSeq, for integrating downstream analysis functions for whole genome sequencing data. KGGSeq has been equipped with a comprehensive set of analysis functions for quality control, filtration, annotation, pathogenic prediction and statistical tests. In tests with whole genome sequencing data from the 1000 Genomes Project, KGGSeq annotated several thousand more reliable non-synonymous variants than other widely used tools (e.g. ANNOVAR and SnpEff). It took only around half an hour on a small server with 10 CPUs to access genotypes of ∼60 million variants of 2504 subjects, while a popular alternative tool required around one day. KGGSeq's bit-block genotype format used 1.5% or less space to flexibly represent phased or unphased genotypes with multiple alleles and calculated genotypic correlation over 1000 times faster. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
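
    The exact bit-block layout used by KGGSeq is not described in this record; the sketch below only illustrates the general principle of packing biallelic genotypes at 2 bits each (dosages 0/1/2 plus a missing code), which is the kind of encoding that yields the reported space and speed gains.

        import numpy as np

        # Genotypes coded 0/1/2 (alt-allele count) and 3 (missing): 2 bits each,
        # so four genotypes fit per byte, far smaller than the equivalent VCF text.
        geno = np.array([0, 1, 2, 3, 1, 1, 0, 2, 2], dtype=np.uint8)

        def pack(g):
            # Pad to a multiple of 4, then pack four 2-bit codes into each byte.
            g = np.concatenate([g, np.zeros((-len(g)) % 4, dtype=np.uint8)])
            g = g.reshape(-1, 4)
            return (g[:, 0] | (g[:, 1] << 2) | (g[:, 2] << 4) | (g[:, 3] << 6)).astype(np.uint8)

        def unpack(b, n):
            # Recover the 2-bit codes in their original order.
            out = np.stack([(b >> s) & 0b11 for s in (0, 2, 4, 6)], axis=1).reshape(-1)
            return out[:n]

        packed = pack(geno)
        assert np.array_equal(unpack(packed, len(geno)), geno)
        print(f"{len(geno)} genotypes stored in {packed.nbytes} bytes")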

  14. Large scale comparative codon-pair context analysis unveils general rules that fine-tune evolution of mRNA primary structure.

    Directory of Open Access Journals (Sweden)

    Gabriela Moura

    BACKGROUND: Codon usage and codon-pair context are important gene primary structure features that influence mRNA decoding fidelity. In order to identify general rules that shape codon-pair context and minimize mRNA decoding error, we have carried out a large scale comparative codon-pair context analysis of 119 fully sequenced genomes. METHODOLOGIES/PRINCIPAL FINDINGS: We have developed mathematical and software tools for large scale comparative codon-pair context analysis. These methodologies unveiled general and species-specific codon-pair context rules that govern evolution of mRNAs in the 3 domains of life. We show that evolution of bacterial and archaeal mRNA primary structure is mainly dependent on constraints imposed by the translational machinery, while in eukaryotes DNA methylation and tri-nucleotide repeats impose strong biases on codon-pair context. CONCLUSIONS: The data highlight fundamental differences between prokaryotic and eukaryotic mRNA decoding rules, which are partially independent of codon usage.
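
    The basic quantity behind such an analysis is the count of adjacent codon pairs within each in-frame coding sequence. A minimal counting sketch follows; a real analysis, as in the paper, aggregates such counts over whole genomes and compares them with expectations derived from codon usage.

        from collections import Counter
        from itertools import islice

        def codon_pair_context(cds):
            """Count adjacent codon pairs in an in-frame coding sequence."""
            codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
            return Counter(zip(codons, islice(codons, 1, None)))

        # Toy in-frame sequence; observed pair frequencies would be compared with
        # those expected from the codon usage of the genome.
        pairs = codon_pair_context("ATGGCTGCTAAAGCTTAA")
        for (c1, c2), n in pairs.items():
            print(c1, c2, n)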

  15. Large-scale experiments for the vulnerability analysis of buildings impacted and intruded by fluviatile torrential hazard processes

    Science.gov (United States)

    Sturm, Michael; Gems, Bernhard; Fuchs, Sven; Mazzorana, Bruno; Papathoma-Köhle, Maria; Aufleger, Markus

    2016-04-01

    In European mountain regions, losses due to torrential hazards are still considerably high, despite the ongoing debate on an overall increasing or decreasing trend. Recent events in Austria clearly revealed that, for technical and economic reasons, an overall protection of settlements in the alpine environment against torrential hazards is not feasible. On the side of the hazard process, events with unpredictable intensities may represent overload scenarios for existing protection structures in the torrent catchments. They bear a particular risk of significant losses in the living space. Although the importance of vulnerability is widely recognised, there is still a research gap concerning its assessment. Currently, potential losses at buildings due to torrential hazards and their comparison with reinstatement costs are determined by the use of empirical functions. Hence, relations between process intensities and the extent of losses, gathered from the analysis of historic hazard events and from object-specific restoration values, are used. This approach is not a physics-based and integral concept, since relevant and often crucial processes, such as the intrusion of the fluid-sediment mixture into elements at risk, are not considered. Based on these findings, our work is targeted at extending the findings and models of present risk research towards an integral, more physics-based vulnerability analysis concept. Fluviatile torrential hazard processes and their impacts on the building envelope are experimentally modelled. Material intrusion processes are thereby explicitly considered. Dynamic impacts are gathered quantitatively and spatially distributed by the use of a large set of force transducers. The experimental tests are accomplished with artificial, vertical and skewed plates, including openings for material intrusion. Further, the impacts on specific buildings within the test site of the work, the fan apex of the Schnannerbach ...

  16. A large-scale analysis of tissue-specific pathology and gene expression of human disease genes and complexes

    DEFF Research Database (Denmark)

    Hansen, Kasper Lage; Hansen, Niclas Tue; Karlberg, Erik, Olof, Linnart

    2008-01-01

    Heritable diseases are caused by germ-line mutations that, despite tissue-wide presence, often lead to tissue-specific pathology. Here, we make a systematic analysis of the link between tissue-specific gene expression and pathological manifestations in many human diseases and cancers. Diseases were ... to be overexpressed in the normal tissues where defects cause pathology. In contrast, cancer genes and complexes were not overexpressed in the tissues from which the tumors emanate. We specifically identified a complex involved in XY sex reversal that is testis-specific and down-regulated in ovaries. We also ...

  17. SOS2 and ACP1 Loci Identified through Large-Scale Exome Chip Analysis Regulate Kidney Development and Function

    OpenAIRE

    Li, Man; Li, Yong; Weeks, Olivia; Mijatovic, Vladan; Teumer, Alexander; Huffman, Jennifer E; Tromp, Gerard; Fuchsberger, Christian; Gorski, Mathias; Lyytikäinen, Leo-Pekka; Nutile, Teresa; Sedaghat, Sanaz; Sorice, Rossella; Tin, Adrienne; Yang, Qiong

    2017-01-01

    Genome-wide association studies have identified >50 common variants associated with kidney function, but these variants do not fully explain the variation in eGFR. We performed a two-stage meta-analysis of associations between genotypes from the Illumina exome array and eGFR on the basis of serum creatinine (eGFRcrea) among participants of European ancestry from the CKDGen Consortium (nStage1: 111,666; nStage2: 48,343). In single-variant analyses, we identified single nucleotide polymorphisms at seven new loci associated with eGFRcrea (PPM1J, EDEM3, ACP1, SPEG, EYA4, CYP1A1, and ATXN2L) ...

  18. Porous structure analysis of large-scale randomly packed pebble bed in high temperature gas-cooled reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Cheng; Yang, Xingtuan; Liu, Zhiyong; Sun, Yanfei; Jiang, Shengyao [Tsinghua Univ., Beijing (China). Key Laboratory of Advanced Reactor Engineering and Safety]; Li, Congxin [Ministry of Environmental Protection of the People's Republic of China, Beijing (China). Nuclear and Radiation Safety Center]

    2015-02-15

    A three-dimensional pebble bed corresponding to the randomly packed bed in the heat transfer test facility built for the High Temperature Reactor Pebble-bed Module (HTR-PM) in Shandong Shidaowan is simulated via the discrete element method. Based on the simulation, we make a detailed analysis of the packing structure of the pebble bed from several aspects, such as transverse section images, longitudinal section images, radial and axial porosity distributions, the two-dimensional porosity distribution and the coordination number distribution. The calculation results show that the radial distribution of porosity is uniform in the center and oscillates near the wall; the axial distribution of porosity oscillates near the bottom and varies linearly with height due to the effect of gravity; and the average coordination number is about seven, coinciding with the most frequent coordination number. The fully established three-dimensional packing structure analysis of the pebble bed in this work is of fundamental significance for understanding the flow and heat transfer characteristics throughout pebble-bed type structures.
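
    As an illustration of how a radial porosity profile can be extracted from simulated packing data, the sketch below Monte Carlo samples points in radial annuli and tests them against pebble positions. The centre cloud is random (not a physical packing) and all dimensions are invented; it only demonstrates the estimator.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(2)

        # Synthetic stand-in for DEM output: pebble centres in a cylindrical bed
        # of radius R and height H, with pebble radius r (values invented).
        R, r, H = 0.5, 0.03, 1.0
        centres = rng.uniform([-R, -R, 0.0], [R, R, H], size=(4000, 3))
        centres = centres[np.hypot(centres[:, 0], centres[:, 1]) < R - r]
        tree = cKDTree(centres)

        # Monte Carlo porosity per radial annulus: sample points away from the
        # top and bottom, and test whether each lies inside any pebble.
        edges = np.linspace(0.0, R, 6)
        for lo, hi in zip(edges[:-1], edges[1:]):
            theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
            rad = np.sqrt(rng.uniform(lo**2, hi**2, 2000))  # uniform over annulus area
            pts = np.column_stack([rad * np.cos(theta), rad * np.sin(theta),
                                   rng.uniform(0.2 * H, 0.8 * H, 2000)])
            dist, _ = tree.query(pts)
            print(f"r in [{lo:.1f}, {hi:.1f}): porosity ~ {np.mean(dist > r):.2f}")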

  19. GEnomes Management Application (GEM.app): a new software tool for large-scale collaborative genome analysis.

    Science.gov (United States)

    Gonzalez, Michael A; Lebrigio, Rafael F Acosta; Van Booven, Derek; Ulloa, Rick H; Powell, Eric; Speziani, Fiorella; Tekin, Mustafa; Schüle, Rebecca; Züchner, Stephan

    2013-06-01

    Novel genes are now identified at a rapid pace for many Mendelian disorders, and increasingly, for genetically complex phenotypes. However, new challenges have also become evident: (1) effectively managing larger exome and/or genome datasets, especially for smaller labs; (2) direct hands-on analysis and contextual interpretation of variant data in large genomic datasets; and (3) many small and medium-sized clinical and research-based investigative teams around the world are generating data that, if combined and shared, will significantly increase the opportunities for the entire community to identify new genes. To address these challenges, we have developed GEnomes Management Application (GEM.app), a software tool to annotate, manage, visualize, and analyze large genomic datasets (https://genomics.med.miami.edu/). GEM.app currently contains ∼1,600 whole exomes from 50 different phenotypes studied by 40 principal investigators from 15 different countries. The focus of GEM.app is on user-friendly analysis for nonbioinformaticians to make next-generation sequencing data directly accessible. Yet, GEM.app provides powerful and flexible filter options, including single family filtering, across family/phenotype queries, nested filtering, and evaluation of segregation in families. In addition, the system is fast, obtaining results within 4 sec across ∼1,200 exomes. We believe that this system will further enhance identification of genetic causes of human disease. © 2013 Wiley Periodicals, Inc.

  20. Large-scale association analysis identifies new lung cancer susceptibility loci and heterogeneity in genetic susceptibility across histological subtypes

    Science.gov (United States)

    McKay, James D.; Hung, Rayjean J.; Han, Younghun; Zong, Xuchen; Carreras-Torres, Robert; Christiani, David C.; Caporaso, Neil E.; Johansson, Mattias; Xiao, Xiangjun; Li, Yafang; Byun, Jinyoung; Dunning, Alison; Pooley, Karen A.; Qian, David C.; Ji, Xuemei; Liu, Geoffrey; Timofeeva, Maria N.; Bojesen, Stig E.; Wu, Xifeng; Le Marchand, Loic; Albanes, Demetrios; Bickeböller, Heike; Aldrich, Melinda C.; Bush, William S.; Tardon, Adonina; Rennert, Gad; Teare, M. Dawn; Field, John K.; Kiemeney, Lambertus A.; Lazarus, Philip; Haugen, Aage; Lam, Stephen; Schabath, Matthew B.; Andrew, Angeline S.; Shen, Hongbing; Hong, Yun-Chul; Yuan, Jian-Min; Bertazzi, Pier Alberto; Pesatori, Angela C.; Ye, Yuanqing; Diao, Nancy; Su, Li; Zhang, Ruyang; Brhane, Yonathan; Leighl, Natasha; Johansen, Jakob S.; Mellemgaard, Anders; Saliba, Walid; Haiman, Christopher A.; Wilkens, Lynne R.; Fernandez-Somoano, Ana; Fernandez-Tardon, Guillermo; van der Heijden, Henricus F.M.; Kim, Jin Hee; Dai, Juncheng; Hu, Zhibin; Davies, Michael PA; Marcus, Michael W.; Brunnström, Hans; Manjer, Jonas; Melander, Olle; Muller, David C.; Overvad, Kim; Trichopoulou, Antonia; Tumino, Rosario; Doherty, Jennifer A.; Barnett, Matt P.; Chen, Chu; Goodman, Gary E.; Cox, Angela; Taylor, Fiona; Woll, Penella; Brüske, Irene; Wichmann, H.-Erich; Manz, Judith; Muley, Thomas R.; Risch, Angela; Rosenberger, Albert; Grankvist, Kjell; Johansson, Mikael; Shepherd, Frances A.; Tsao, Ming-Sound; Arnold, Susanne M.; Haura, Eric B.; Bolca, Ciprian; Holcatova, Ivana; Janout, Vladimir; Kontic, Milica; Lissowska, Jolanta; Mukeria, Anush; Ognjanovic, Simona; Orlowski, Tadeusz M.; Scelo, Ghislaine; Swiatkowska, Beata; Zaridze, David; Bakke, Per; Skaug, Vidar; Zienolddiny, Shanbeh; Duell, Eric J.; Butler, Lesley M.; Koh, Woon-Puay; Gao, Yu-Tang; Houlston, Richard S.; McLaughlin, John; Stevens, Victoria L.; Joubert, Philippe; Lamontagne, Maxime; Nickle, David C.; Obeidat, Ma’en; Timens, Wim; Zhu, Bin; Song, Lei; Kachuri, Linda; Artigas, María Soler; Tobin, Martin D.; Wain, Louise V.; Rafnar, Thorunn; Thorgeirsson, Thorgeir E.; Reginsson, Gunnar W.; Stefansson, Kari; Hancock, Dana B.; Bierut, Laura J.; Spitz, Margaret R.; Gaddis, Nathan C.; Lutz, Sharon M.; Gu, Fangyi; Johnson, Eric O.; Kamal, Ahsan; Pikielny, Claudio; Zhu, Dakai; Lindströem, Sara; Jiang, Xia; Tyndale, Rachel F.; Chenevix-Trench, Georgia; Beesley, Jonathan; Bossé, Yohan; Chanock, Stephen; Brennan, Paul; Landi, Maria Teresa; Amos, Christopher I.

    2017-01-01

    Summary: While several lung cancer susceptibility loci have been identified, much of lung cancer heritability remains unexplained. Here, 14,803 cases and 12,262 controls of European descent were genotyped on the OncoArray and combined with existing data for an aggregated GWAS analysis of lung cancer in 29,266 cases and 56,450 controls. We identified 18 susceptibility loci achieving genome-wide significance, including 10 novel loci. The novel loci highlighted the striking heterogeneity in genetic susceptibility across lung cancer histological subtypes, with four loci associated with lung cancer overall and six with lung adenocarcinoma. Gene expression quantitative trait locus (eQTL) analysis in 1,425 normal lung tissues highlighted RNASET2, SECISBP2L and NRG1 as candidate genes. Other loci include genes such as a cholinergic nicotinic receptor, CHRNA2, and the telomere-related genes OBFC1 and RTEL1. Further exploration of the target genes will continue to provide new insights into the etiology of lung cancer. PMID:28604730

  1. Large-scale genome-wide association analysis of bipolar disorder identifies a new susceptibility locus near ODZ4.

    LENUS (Irish Health Repository)

    Sklar, Pamela

    2011-10-01

    We conducted a combined genome-wide association study (GWAS) of 7,481 individuals with bipolar disorder (cases) and 9,250 controls as part of the Psychiatric GWAS Consortium. Our replication study tested 34 SNPs in 4,496 independent cases with bipolar disorder and 42,422 independent controls and found that 18 of 34 SNPs had P < 0.05, with 31 of 34 SNPs having signals with the same direction of effect (P = 3.8 × 10(-7)). An analysis of all 11,974 bipolar disorder cases and 51,792 controls confirmed genome-wide significant evidence of association for CACNA1C and identified a new intronic variant in ODZ4. We identified a pathway comprised of subunits of calcium channels enriched in bipolar disorder association intervals. Finally, a combined GWAS analysis of schizophrenia and bipolar disorder yielded strong association evidence for SNPs in CACNA1C and in the region of NEK4-ITIH1-ITIH3-ITIH4. Our replication results imply that increasing sample sizes in bipolar disorder will confirm many additional loci.

  2. Large-scale association analysis identifies new lung cancer susceptibility loci and heterogeneity in genetic susceptibility across histological subtypes.

    Science.gov (United States)

    McKay, James D; Hung, Rayjean J; Han, Younghun; Zong, Xuchen; Carreras-Torres, Robert; Christiani, David C; Caporaso, Neil E; Johansson, Mattias; Xiao, Xiangjun; Li, Yafang; Byun, Jinyoung; Dunning, Alison; Pooley, Karen A; Qian, David C; Ji, Xuemei; Liu, Geoffrey; Timofeeva, Maria N; Bojesen, Stig E; Wu, Xifeng; Le Marchand, Loic; Albanes, Demetrios; Bickeböller, Heike; Aldrich, Melinda C; Bush, William S; Tardon, Adonina; Rennert, Gad; Teare, M Dawn; Field, John K; Kiemeney, Lambertus A; Lazarus, Philip; Haugen, Aage; Lam, Stephen; Schabath, Matthew B; Andrew, Angeline S; Shen, Hongbing; Hong, Yun-Chul; Yuan, Jian-Min; Bertazzi, Pier Alberto; Pesatori, Angela C; Ye, Yuanqing; Diao, Nancy; Su, Li; Zhang, Ruyang; Brhane, Yonathan; Leighl, Natasha; Johansen, Jakob S; Mellemgaard, Anders; Saliba, Walid; Haiman, Christopher A; Wilkens, Lynne R; Fernandez-Somoano, Ana; Fernandez-Tardon, Guillermo; van der Heijden, Henricus F M; Kim, Jin Hee; Dai, Juncheng; Hu, Zhibin; Davies, Michael P A; Marcus, Michael W; Brunnström, Hans; Manjer, Jonas; Melander, Olle; Muller, David C; Overvad, Kim; Trichopoulou, Antonia; Tumino, Rosario; Doherty, Jennifer A; Barnett, Matt P; Chen, Chu; Goodman, Gary E; Cox, Angela; Taylor, Fiona; Woll, Penella; Brüske, Irene; Wichmann, H-Erich; Manz, Judith; Muley, Thomas R; Risch, Angela; Rosenberger, Albert; Grankvist, Kjell; Johansson, Mikael; Shepherd, Frances A; Tsao, Ming-Sound; Arnold, Susanne M; Haura, Eric B; Bolca, Ciprian; Holcatova, Ivana; Janout, Vladimir; Kontic, Milica; Lissowska, Jolanta; Mukeria, Anush; Ognjanovic, Simona; Orlowski, Tadeusz M; Scelo, Ghislaine; Swiatkowska, Beata; Zaridze, David; Bakke, Per; Skaug, Vidar; Zienolddiny, Shanbeh; Duell, Eric J; Butler, Lesley M; Koh, Woon-Puay; Gao, Yu-Tang; Houlston, Richard S; McLaughlin, John; Stevens, Victoria L; Joubert, Philippe; Lamontagne, Maxime; Nickle, David C; Obeidat, Ma'en; Timens, Wim; Zhu, Bin; Song, Lei; Kachuri, Linda; Artigas, María Soler; Tobin, Martin D; Wain, Louise V; Rafnar, Thorunn; Thorgeirsson, Thorgeir E; Reginsson, Gunnar W; Stefansson, Kari; Hancock, Dana B; Bierut, Laura J; Spitz, Margaret R; Gaddis, Nathan C; Lutz, Sharon M; Gu, Fangyi; Johnson, Eric O; Kamal, Ahsan; Pikielny, Claudio; Zhu, Dakai; Lindströem, Sara; Jiang, Xia; Tyndale, Rachel F; Chenevix-Trench, Georgia; Beesley, Jonathan; Bossé, Yohan; Chanock, Stephen; Brennan, Paul; Landi, Maria Teresa; Amos, Christopher I

    2017-07-01

    Although several lung cancer susceptibility loci have been identified, much of the heritability for lung cancer remains unexplained. Here 14,803 cases and 12,262 controls of European descent were genotyped on the OncoArray and combined with existing data for an aggregated genome-wide association study (GWAS) analysis of lung cancer in 29,266 cases and 56,450 controls. We identified 18 susceptibility loci achieving genome-wide significance, including 10 new loci. The new loci highlight the striking heterogeneity in genetic susceptibility across the histological subtypes of lung cancer, with four loci associated with lung cancer overall and six loci associated with lung adenocarcinoma. Gene expression quantitative trait locus (eQTL) analysis in 1,425 normal lung tissue samples highlights RNASET2, SECISBP2L and NRG1 as candidate genes. Other loci include genes such as a cholinergic nicotinic receptor, CHRNA2, and the telomere-related genes OBFC1 and RTEL1. Further exploration of the target genes will continue to provide new insights into the etiology of lung cancer.

  3. Application of the multi-element analysis by X-fluorescence and neutron activation to the characterization of an archaeological site

    International Nuclear Information System (INIS)

    Rossini, I.

    1991-06-01

    The first part of this thesis deals with possible analysis methods (XRF, PIXE, INAA, laser fluorimetry, and ICP) applied to uranium, thorium and rubidium assays in archaeological clays and potteries. The best results were obtained with neutron activation techniques. The second part deals with the multi-element analysis of quarries and with the search, by statistical treatment, for correlations between the element concentrations and the sampling sites (excavations, quarries).
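
    The statistical treatment in the second part amounts to looking for groupings in multi-element concentration data. A minimal sketch of that idea using principal component analysis follows; the U, Th and Rb concentrations for the two quarries and the sherds are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical U, Th, Rb concentrations (ppm) for samples from two clay
        # quarries and a set of potsherds; rows are samples, columns are elements.
        quarry_a = rng.normal([3.0, 12.0, 110.0], [0.3, 1.0, 8.0], size=(10, 3))
        quarry_b = rng.normal([1.5, 8.0, 150.0], [0.3, 1.0, 8.0], size=(10, 3))
        sherds = rng.normal([2.9, 11.5, 112.0], [0.4, 1.2, 9.0], size=(5, 3))

        X = np.vstack([quarry_a, quarry_b, sherds])
        Z = (X - X.mean(0)) / X.std(0)            # standardise each element

        # PCA via SVD; the first two components summarise the element correlations.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        scores = U[:, :2] * s[:2]

        # Sherds whose scores fall near a quarry's cluster are consistent with it.
        print("quarry A centroid:", scores[:10].mean(0).round(2))
        print("quarry B centroid:", scores[10:20].mean(0).round(2))
        print("sherd scores:\n", scores[20:].round(2))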

  4. Approaches to learning as predictors of academic achievement: Results from a large scale, multi-level analysis

    DEFF Research Database (Denmark)

    Herrmann, Kim Jesper; McCune, Velda; Bager-Elsborg, Anna

    2017-01-01

    The relationships between university students' academic achievement and their approaches to learning and studying continuously attract scholarly attention. We report the results of an analysis in which multilevel linear modelling was used to analyse data from 3,626 Danish university students. Controlling for the effects of age, gender, and progression, we found that the students' end-of-semester grade point averages were related negatively to a surface approach and positively to organised effort. Interestingly, the effect of the surface approach on academic achievement varied across programmes. While there has been considerable interest in the ways in which academic programmes shape learning and teaching, the effects of these contexts on the relationship between approaches to learning and academic outcomes are under-researched. The results are discussed in relation to findings from recent meta-analyses.

  5. SOS2 and ACP1 Loci Identified through Large-Scale Exome Chip Analysis Regulate Kidney Development and Function

    DEFF Research Database (Denmark)

    Li, Man; Li, Yong; Weeks, Olivia

    2017-01-01

    Genome-wide association studies have identified >50 common variants associated with kidney function, but these variants do not fully explain the variation in eGFR. We performed a two-stage meta-analysis of associations between genotypes from the Illumina exome array and eGFR on the basis of serum creatinine (eGFRcrea) among participants of European ancestry from the CKDGen Consortium (nStage1: 111,666; nStage2: 48,343). In single-variant analyses, we identified single nucleotide polymorphisms at seven new loci associated with eGFRcrea (PPM1J, EDEM3, ACP1, SPEG, EYA4, CYP1A1, and ATXN2L; PStage1 ...) ... associations of functional rare variants in three genes with eGFRcrea, including a novel association with the SOS Ras/Rho guanine nucleotide exchange factor 2 gene, SOS2 (P=5.4×10(-8) by sequence kernel association test) ...

  6. Understanding Turkish students' preferences for distance education depending on financial circumstances: A large-scale CHAID analysis

    Science.gov (United States)

    Firat, Mehmet

    2017-04-01

    In the past, distance education was used as a method to meet the educational needs of citizens with limited options to attend an institution of higher education. Nowadays, it has become irreplaceable in higher education thanks to developments in instructional technology. But the question of why students choose distance education is still important. The purpose of this study was to determine Turkish students' reasons for choosing distance education and to investigate how these reasons differ depending on their financial circumstances. The author used a Chi-squared Automatic Interaction Detector (CHAID) analysis to determine 18,856 Turkish students' reasons for choosing distance education. Results of the research revealed that Turkish students chose distance education not because of geographical limitations, family-related problems or economic difficulties, but for such reasons as already being engaged in their profession, increasing their knowledge, and seeking promotion to a better position.
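
    CHAID builds its tree by repeatedly splitting on the predictor whose cross-tabulation with the outcome yields the smallest chi-squared p-value. The sketch below reproduces that core step for one split with scipy; the survey categories and frequencies are hypothetical, and no actual CHAID implementation is assumed.

        import numpy as np
        from scipy.stats import chi2_contingency

        rng = np.random.default_rng(4)

        # Hypothetical survey: financial circumstance versus reason for choosing
        # distance education, with invented category probabilities.
        income = rng.choice(["low", "mid", "high"], size=1000, p=[0.3, 0.5, 0.2])
        reasons = ["working", "knowledge", "promotion"]
        probs = {"low": [0.5, 0.3, 0.2], "mid": [0.35, 0.4, 0.25], "high": [0.2, 0.4, 0.4]}
        reason = np.array([rng.choice(reasons, p=probs[i]) for i in income])

        # Cross-tabulate and test independence; a small p-value is what would make
        # CHAID split the sample on financial circumstance.
        table = np.array([[np.sum((income == i) & (reason == r)) for r in reasons]
                          for i in ("low", "mid", "high")])
        chi2, p, dof, _ = chi2_contingency(table)
        print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")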

  7. Large-scale association analysis provides insights into the genetic architecture and pathophysiology of type 2 diabetes

    Science.gov (United States)

    Morris, Andrew P; Voight, Benjamin F; Teslovich, Tanya M; Ferreira, Teresa; Segrè, Ayellet V; Steinthorsdottir, Valgerdur; Strawbridge, Rona J; Khan, Hassan; Grallert, Harald; Mahajan, Anubha; Prokopenko, Inga; Kang, Hyun Min; Dina, Christian; Esko, Tonu; Fraser, Ross M; Kanoni, Stavroula; Kumar, Ashish; Lagou, Vasiliki; Langenberg, Claudia; Luan, Jian'an; Lindgren, Cecilia M; Müller-Nurasyid, Martina; Pechlivanis, Sonali; Rayner, N William; Scott, Laura J; Wiltshire, Steven; Yengo, Loic; Kinnunen, Leena; Rossin, Elizabeth J; Raychaudhuri, Soumya; Johnson, Andrew D; Dimas, Antigone S; Loos, Ruth J F; Vedantam, Sailaja; Chen, Han; Florez, Jose C; Fox, Caroline; Liu, Ching-Ti; Rybin, Denis; Couper, David J; Kao, Wen Hong L; Li, Man; Cornelis, Marilyn C; Kraft, Peter; Sun, Qi; van Dam, Rob M; Stringham, Heather M; Chines, Peter S; Fischer, Krista; Fontanillas, Pierre; Holmen, Oddgeir L; Hunt, Sarah E; Jackson, Anne U; Kong, Augustine; Lawrence, Robert; Meyer, Julia; Perry, John RB; Platou, Carl GP; Potter, Simon; Rehnberg, Emil; Robertson, Neil; Sivapalaratnam, Suthesh; Stančáková, Alena; Stirrups, Kathleen; Thorleifsson, Gudmar; Tikkanen, Emmi; Wood, Andrew R; Almgren, Peter; Atalay, Mustafa; Benediktsson, Rafn; Bonnycastle, Lori L; Burtt, Noël; Carey, Jason; Charpentier, Guillaume; Crenshaw, Andrew T; Doney, Alex S F; Dorkhan, Mozhgan; Edkins, Sarah; Emilsson, Valur; Eury, Elodie; Forsen, Tom; Gertow, Karl; Gigante, Bruna; Grant, George B; Groves, Christopher J; Guiducci, Candace; Herder, Christian; Hreidarsson, Astradur B; Hui, Jennie; James, Alan; Jonsson, Anna; Rathmann, Wolfgang; Klopp, Norman; Kravic, Jasmina; Krjutškov, Kaarel; Langford, Cordelia; Leander, Karin; Lindholm, Eero; Lobbens, Stéphane; Männistö, Satu; Mirza, Ghazala; Mühleisen, Thomas W; Musk, Bill; Parkin, Melissa; Rallidis, Loukianos; Saramies, Jouko; Sennblad, Bengt; Shah, Sonia; Sigurðsson, Gunnar; Silveira, Angela; Steinbach, Gerald; Thorand, Barbara; Trakalo, Joseph; Veglia, Fabrizio; Wennauer, Roman; Winckler, Wendy; Zabaneh, Delilah; Campbell, Harry; van Duijn, Cornelia; Uitterlinden89-, Andre G; Hofman, Albert; Sijbrands, Eric; Abecasis, Goncalo R; Owen, Katharine R; Zeggini, Eleftheria; Trip, Mieke D; Forouhi, Nita G; Syvänen, Ann-Christine; Eriksson, Johan G; Peltonen, Leena; Nöthen, Markus M; Balkau, Beverley; Palmer, Colin N A; Lyssenko, Valeriya; Tuomi, Tiinamaija; Isomaa, Bo; Hunter, David J; Qi, Lu; Shuldiner, Alan R; Roden, Michael; Barroso, Ines; Wilsgaard, Tom; Beilby, John; Hovingh, Kees; Price, Jackie F; Wilson, James F; Rauramaa, Rainer; Lakka, Timo A; Lind, Lars; Dedoussis, George; Njølstad, Inger; Pedersen, Nancy L; Khaw, Kay-Tee; Wareham, Nicholas J; Keinanen-Kiukaanniemi, Sirkka M; Saaristo, Timo E; Korpi-Hyövälti, Eeva; Saltevo, Juha; Laakso, Markku; Kuusisto, Johanna; Metspalu, Andres; Collins, Francis S; Mohlke, Karen L; Bergman, Richard N; Tuomilehto, Jaakko; Boehm, Bernhard O; Gieger, Christian; Hveem, Kristian; Cauchi, Stephane; Froguel, Philippe; Baldassarre, Damiano; Tremoli, Elena; Humphries, Steve E; Saleheen, Danish; Danesh, John; Ingelsson, Erik; Ripatti, Samuli; Salomaa, Veikko; Erbel, Raimund; Jöckel, Karl-Heinz; Moebus, Susanne; Peters, Annette; Illig, Thomas; de Faire, Ulf; Hamsten, Anders; Morris, Andrew D; Donnelly, Peter J; Frayling, Timothy M; Hattersley, Andrew T; Boerwinkle, Eric; Melander, Olle; Kathiresan, Sekar; Nilsson, Peter M; Deloukas, Panos; Thorsteinsdottir, Unnur; Groop, Leif C; Stefansson, Kari; Hu, Frank; Pankow, James S; Dupuis, Josée; Meigs, James B; 
Altshuler, David; Boehnke, Michael; McCarthy, Mark I

    2012-01-01

    To extend understanding of the genetic architecture and molecular basis of type 2 diabetes (T2D), we conducted a meta-analysis of genetic variants on the Metabochip involving 34,840 cases and 114,981 controls, overwhelmingly of European descent. We identified ten previously unreported T2D susceptibility loci, including two demonstrating sex-differentiated association. Genome-wide analyses of these data are consistent with a long tail of further common variant loci explaining much of the variation in susceptibility to T2D. Exploration of the enlarged set of susceptibility loci implicates several processes, including CREBBP-related transcription, adipocytokine signalling and cell cycle regulation, in diabetes pathogenesis. PMID:22885922

  8. Cost analysis of large-scale implementation of the 'Helping Babies Breathe' newborn resuscitation-training program in Tanzania.

    Science.gov (United States)

    Chaudhury, Sumona; Arlington, Lauren; Brenan, Shelby; Kairuki, Allan Kaijunga; Meda, Amunga Robson; Isangula, Kahabi G; Mponzi, Victor; Bishanga, Dunstan; Thomas, Erica; Msemo, Georgina; Azayo, Mary; Molinier, Alice; Nelson, Brett D

    2016-12-01

    Helping Babies Breathe (HBB) has become the global gold standard for training birth attendants in neonatal resuscitation in low-resource settings, in efforts to reduce early newborn asphyxia and mortality. The purpose of this study was to perform a first-ever activity-based cost analysis of at-scale HBB program implementation and initial follow-up in a large region of Tanzania, and to evaluate the costs of national scale-up as one component of a multi-method external evaluation of the implementation of HBB at scale in Tanzania. We used activity-based costing to examine budget expense data during the two-month implementation and follow-up of HBB in one of the target regions. Activity-cost centers included administrative, initial training (including resuscitation equipment), and follow-up training expenses. Sensitivity analysis was utilized to project cost scenarios incurred to achieve countrywide expansion of the program across all mainland regions of Tanzania and to model costs of program maintenance over one and five years following initiation. Total costs for the Mbeya Region were $202,240, with the highest proportion due to initial training and equipment (45.2%), followed by central program administration (37.2%), and follow-up visits (17.6%). Within Mbeya, 49 training sessions were undertaken, involving the training of 1,341 health providers from 336 health facilities in eight districts. To similarly expand the HBB program across the 25 regions of mainland Tanzania, the total economic cost is projected to be around $4,000,000 (around $600 per facility). Following sensitivity analyses, the estimated total for initial rollout across all of Tanzania lies between $2,934,793 and $4,309,595. In order to maintain the program nationally under the current model, it is estimated that it would cost $2,019,115 for a further one year and $5,640,794 for a further five years of ongoing program support. HBB implementation is a relatively low-cost intervention with potential for high impact on perinatal ...
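
    One simple way the projected band can arise is to treat central administration as largely fixed while training and follow-up costs scale with the number of regions. The sketch below reproduces that arithmetic from the cost shares reported above; the fixed/variable split is an assumption for illustration, not the study's own sensitivity model.

        # Activity-based cost centres for the Mbeya Region (USD), from the shares
        # reported above: administration 37.2%, training 45.2%, follow-up 17.6%.
        mbeya_total = 202_240
        admin = 0.372 * mbeya_total
        training = 0.452 * mbeya_total
        followup = 0.176 * mbeya_total

        # Assumed split: administration is fixed nationally, while training and
        # follow-up scale with the 25 mainland regions.
        national = admin + 25 * (training + followup)
        print(f"projected national rollout: ${national:,.0f}")
        # ~ $3.25M, which falls inside the reported $2.93M-$4.31M sensitivity band.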

  9. Analysis on deep metallogenic trace and simulation experiment in xiangshan large-scale volcanic hydrothermal type uranium deposit

    International Nuclear Information System (INIS)

    Liu Zhengyi; Liu Zhangyue; Wen Zhijian; Du Letian

    2010-01-01

    Based on a series of experiments and field geologic analysis, combined with a deep metallogenic tracing experimental model derived from the field model, this paper presents the first domestic determination of the distribution coefficients of U and Th from magmatic experiments, and then discusses the geochemical behavior of U, Th and K during the stage of magmatic evolution. The experiments show a close relationship between U and Na during the hydrothermal alteration stage, and between U and K during the metallogenic stage, which proves that U and K are incompatible and reveals the regular variation between K and Na. The experiments also show that the uranium-dissolving ability of fluids in the basement metamorphic rocks and host rocks increases with pressure, which indicates a good prospect for deep mineralization. Furthermore, Pb, Sr, Nd and He isotopes show that the volcanic rocks and basement rocks are ore source beds; through the combined action of volcanic hydrothermal fluids and mantle fluids, uranium underwent multiple stages of migration and enrichment and finally concentrated into a large, rich deposit. (authors)

  10. Characteristics of the Lotus japonicus gene repertoire deduced from large-scale expressed sequence tag (EST) analysis.

    Science.gov (United States)

    Asamizu, Erika; Nakamura, Yasukazu; Sato, Shusei; Tabata, Satoshi

    2004-02-01

    To perform a comprehensive analysis of genes expressed in a model legume, Lotus japonicus, a total of 74472 3'-end expressed sequence tags (EST) were generated from cDNA libraries produced from six different organs. Clustering of sequences was performed with an identity criterion of 95% for 50 bases, and a total of 20457 non-redundant sequences, 8503 contigs and 11954 singletons were generated. EST sequence coverage was analyzed by using the annotated L. japonicus genomic sequence and 1093 of the 1889 predicted protein-encoding genes (57.9%) were hit by the EST sequence(s). Gene content was compared to several plant species. Among the 8503 contigs, 471 were identified as sequences conserved only in leguminous species and these included several disease resistance-related genes. This suggested that in legumes, these genes may have evolved specifically to resist pathogen attack. The rate of gene sequence divergence was assessed by comparing similarity level and functional category based on the Gene Ontology (GO) annotation of Arabidopsis genes. This revealed that genes encoding ribosomal proteins, as well as those related to translation, photosynthesis, and cellular structure were more abundantly represented in the highly conserved class, and that genes encoding transcription factors and receptor protein kinases were abundantly represented in the less conserved class. To make the sequence information and the cDNA clones available to the research community, a Web database with useful services was created at http://www.kazusa.or.jp/en/plant/lotus/EST/.

  11. Deciding where to attend: Large-scale network mechanisms underlying attention and intention revealed by graph-theoretic analysis.

    Science.gov (United States)

    Liu, Yuelu; Hong, Xiangfei; Bengson, Jesse J; Kelley, Todd A; Ding, Mingzhou; Mangun, George R

    2017-08-15

    The neural mechanisms by which intentions are transformed into actions remain poorly understood. We investigated the network mechanisms underlying spontaneous voluntary decisions about where to focus visual-spatial attention (willed attention). Graph-theoretic analysis of two independent datasets revealed that regions activated during willed attention form a set of functionally-distinct networks corresponding to the frontoparietal network, the cingulo-opercular network, and the dorsal attention network. Contrasting willed attention with instructed attention (where attention is directed by external cues), we observed that the dorsal anterior cingulate cortex was allied with the dorsal attention network in instructed attention, but shifted connectivity during willed attention to interact with the cingulo-opercular network, which then mediated communications between the frontoparietal network and the dorsal attention network. Behaviorally, greater connectivity in network hubs, including the dorsolateral prefrontal cortex, the dorsal anterior cingulate cortex, and the inferior parietal lobule, was associated with faster reaction times. These results, shown to be consistent across the two independent datasets, uncover the dynamic organization of functionally-distinct networks engaged to support intentional acts. Copyright © 2017 Elsevier Inc. All rights reserved.
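
    As an illustration of the graph-theoretic measures involved, the sketch below computes weighted degree (strength) and betweenness centrality with networkx on a toy graph. The region names come from the abstract, but the edges and weights are invented; this is not the study's connectivity data.

        import networkx as nx

        # Hypothetical connectivity among regions named in the study; weights
        # stand in for functional-connectivity strength during willed attention.
        edges = [("dACC", "aINS", 0.9), ("dACC", "FEF", 0.8), ("dlPFC", "IPL", 0.7),
                 ("dlPFC", "dACC", 0.6), ("IPL", "FEF", 0.5), ("aINS", "IPL", 0.4)]
        G = nx.Graph()
        G.add_weighted_edges_from(edges)

        # Hub measures of the kind related to reaction times in the study:
        # weighted degree (strength) and betweenness centrality.
        strength = dict(G.degree(weight="weight"))
        betweenness = nx.betweenness_centrality(G)   # unweighted for simplicity
        for node in sorted(G):
            print(f"{node}: strength={strength[node]:.1f}, "
                  f"betweenness={betweenness[node]:.2f}")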

  12. Large-Scale Battery System Development and User-Specific Driving Behavior Analysis for Emerging Electric-Drive Vehicles

    Directory of Open Access Journals (Sweden)

    Yihe Sun

    2011-04-01

    Emerging green-energy transportation, such as hybrid electric vehicles (HEVs) and plug-in HEVs (PHEVs), has a great potential for reduction of fuel consumption and greenhouse emissions. The lithium-ion battery system used in these vehicles, however, is bulky, expensive and unreliable, and has been the primary roadblock for transportation electrification. Meanwhile, few studies have considered user-specific driving behavior and its significant impact on (P)HEV fuel efficiency, battery system lifetime, and the environment. This paper presents a detailed investigation of battery system modeling and real-world user-specific driving behavior analysis for emerging electric-drive vehicles. The proposed model is fast to compute and accurate for analyzing battery system run-time and long-term cycle life, with a focus on temperature-dependent battery system capacity fading and variation. The proposed solution is validated against physical measurement using real-world user driving studies, and has been adopted to facilitate battery system design and optimization. Using the collected real-world hybrid vehicle and run-time driving data, we have also conducted detailed analytical studies of users' specific driving patterns and their impacts on hybrid vehicle electric energy and fuel efficiency. This work provides a solid foundation for future energy control with emerging electric-drive applications.

  13. Large-scale structure of a network of co-occurring MeSH terms: statistical analysis of macroscopic properties.

    Directory of Open Access Journals (Sweden)

    Andrej Kastrin

    Concept associations can be represented by a network that consists of a set of nodes representing concepts and a set of edges representing their relationships. Complex networks exhibit some common topological features, including small diameter, a high degree of clustering, a power-law degree distribution, and modularity. We investigated the topological properties of a network constructed from co-occurrences between MeSH descriptors in the MEDLINE database. We conducted the analysis on two networks, one constructed from all MeSH descriptors and another using only major descriptors. Network reduction was performed using Pearson's chi-square test for independence. To characterize the topological properties of the network we adopted specific measures, including diameter, average path length, clustering coefficient, and degree distribution. For the full MeSH network, the average path length was 1.95 with a diameter of three edges and a clustering coefficient of 0.26. The Kolmogorov-Smirnov test rejects the power law as a plausible model for the degree distribution. For the major MeSH network, the average path length was 2.63 edges with a diameter of seven edges and a clustering coefficient of 0.15. The Kolmogorov-Smirnov test failed to reject the power law as a plausible model; the power-law exponent was 5.07. In both networks it was evident that nodes with a lower degree exhibit higher clustering than those with a higher degree. After a simulated attack, in which we removed 10% of the nodes with the highest degrees, the giant component of each of the two networks contained about 90% of all nodes. Because of the small average path length and high degree of clustering, the MeSH network is small-world. A power-law distribution is not a plausible model for the degree distribution. The network is highly modular, highly resistant to targeted and random attack, and shows minimal disassortativity.
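
    The macroscopic statistics reported above (average path length, clustering, robustness to targeted attack) can be reproduced on any graph with networkx. The sketch below runs them on a synthetic scale-free graph standing in for the MeSH co-occurrence network; sizes and results are illustrative only.

        import networkx as nx

        # A scale-free toy graph stands in for the MeSH network; in the study,
        # nodes are MeSH descriptors and edges are chi-square-filtered
        # co-occurrences in MEDLINE.
        G = nx.barabasi_albert_graph(n=2000, m=3, seed=5)

        print("average path length:", round(nx.average_shortest_path_length(G), 2))
        print("clustering coefficient:", round(nx.average_clustering(G), 3))

        # Targeted attack, as in the analysis above: remove the 10% highest-degree
        # nodes and measure the surviving giant component.
        hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:len(G) // 10]
        G.remove_nodes_from(n for n, _ in hubs)
        giant = max(nx.connected_components(G), key=len)
        print("giant component after attack:", f"{len(giant) / 2000:.0%}")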

  14. Comparison of children versus adults undergoing mini-percutaneous nephrolithotomy: large-scale analysis of a single institution.

    Directory of Open Access Journals (Sweden)

    Guohua Zeng

Full Text Available OBJECTIVE: As almost any version of percutaneous nephrolithotomy (PCNL) has been safely and efficiently applied to adults as well as children, without age being a limiting risk factor, the aim of the study was to compare the different characteristics as well as the efficacy, outcome, and safety of the pediatric and adult patients who had undergone mini-PCNL (MPCNL) in a single institution. METHODS: We retrospectively reviewed 331 renal units in children and 8537 renal units in adults that had undergone MPCNL for upper urinary tract stones between the years 2000-2012. The safety, efficacy, and outcome were analyzed and compared. RESULTS: The children had a smaller stone size (2.3 vs. 3.1 cm) but similar stone distribution (number and locations). The children required fewer percutaneous accesses, a smaller nephrostomy tract, a shorter operative time and less hemoglobin drop. The children also had a higher initial stone-free rate (SFR) (80.4% vs. 78.6%) after a single session of MPCNL (p > 0.05). Both groups had a low rate of high-grade Clavien complications. There were no grade III, IV, or V complications, and no angiographic embolization was required in the pediatric group. One important caveat: children who required multiple percutaneous nephrostomy tracts had a significantly higher transfusion rate than adults (18.8% vs. 4.5%, p = 0.007). CONCLUSIONS: This contemporary largest-scale analysis confirms that the stone-free rate in pediatric patients is at least as good as in adults without an increase in complication rates. However, multiple percutaneous nephrostomy tracts should be practiced with caution in children.
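
    The group comparisons reported above are standard contingency-table tests. A small illustrative recomputation of the multiple-tract transfusion comparison (18.8% vs. 4.5%, reported p = 0.007) is sketched below; the group sizes are assumptions chosen only to match the reported percentages, not the study's actual counts:

```python
# Illustrative recomputation of the multiple-tract transfusion comparison
# (18.8% in children vs. 4.5% in adults, reported p = 0.007). The group
# sizes are assumptions chosen to match the percentages, not study counts.
from scipy.stats import fisher_exact

children = [6, 26]    # (transfused, not transfused): 6/32 = 18.8%
adults = [9, 191]     # 9/200 = 4.5%
odds_ratio, p_value = fisher_exact([children, adults])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```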

  15. Test and Analysis Correlation of a Large-Scale, Orthogrid-Stiffened Metallic Cylinder without Weld Lands

    Science.gov (United States)

    Rudd, Michelle T.; Hilburger, Mark W.; Lovejoy, Andrew E.; Lindell, Michael C.; Gardner, Nathaniel W.; Schultz, Marc R.

    2018-01-01

The NASA Engineering Safety Center (NESC) Shell Buckling Knockdown Factor Project (SBKF) was established in 2007 by the NESC with the primary objective to develop analysis-based buckling design factors and guidelines for metallic and composite launch-vehicle structures. A secondary objective of the project is to advance technologies that have the potential to increase the structural efficiency of launch vehicles. The SBKF Project has determined that weld-land stiffness discontinuities can significantly reduce the buckling load of a cylinder. In addition, the welding process can introduce localized geometric imperfections that can further exacerbate the inherent buckling imperfection sensitivity of the cylinder. Therefore, single-piece barrel fabrication technologies can improve structural efficiency by eliminating these weld-land issues. As part of this effort, SBKF partnered with the Advanced Materials and Processing Branch (AMPB) at NASA Langley Research Center (LaRC), the Mechanical and Fabrication Branch at NASA Marshall Space Flight Center (MSFC), and ATI Forged Products to design and fabricate an 8-ft-diameter orthogrid-stiffened seamless metallic cylinder. The cylinder was subjected to seven subcritical load sequences (load levels that are not intended to induce test article buckling or material failure) and one load sequence to failure. The purpose of this test effort was to demonstrate the potential benefits of building cylindrical structures with no weld lands using the flow-formed manufacturing process. This seamless barrel is the ninth 8-ft-diameter metallic barrel and the first single-piece metallic structure to be tested under this program.

  16. Large scale gene expression meta-analysis reveals tissue-specific, sex-biased gene expression in humans

    Directory of Open Access Journals (Sweden)

    Benjamin Mayne

    2016-10-01

Full Text Available The severity and prevalence of many diseases are known to differ between the sexes. Organ-specific sex-biased gene expression may underpin these and other sexually dimorphic traits. To further our understanding of sex differences in transcriptional regulation, we performed meta-analyses of sex-biased gene expression in multiple human tissues. We analysed 22 publicly available human gene expression microarray data sets including over 2500 samples from 15 different tissues and 9 different organs. Briefly, by using an inverse-variance method we determined the effect size difference of gene expression between males and females. We found the greatest sex differences in gene expression in the brain, specifically in the anterior cingulate cortex (1818 genes), followed by the heart (375 genes), kidney (224 genes), colon (218 genes) and thyroid (163 genes). More interestingly, we found different parts of the brain with varying numbers and identities of sex-biased genes, indicating that specific cortical regions may influence sexually dimorphic traits. The majority of sex-biased genes in other tissues such as the bladder, liver, lungs and pancreas were on the sex chromosomes or involved in sex hormone production. On average in each tissue, 32% of autosomal genes that were expressed in a sex-biased fashion contained androgen or estrogen hormone response elements. Interestingly, across all tissues, we found approximately two-thirds of autosomal genes that were sex-biased were not under direct influence of sex hormones. To our knowledge this is the largest analysis of sex-biased gene expression in human tissues to date. We identified many sex-biased genes that were not under the direct influence of sex chromosome genes or sex hormones. These may provide targets for future development of sex-specific treatments for diseases.
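
    The inverse-variance step mentioned in the abstract is the core of a fixed-effect meta-analysis: each dataset's effect size is weighted by the reciprocal of its squared standard error. A minimal sketch with made-up numbers:

```python
# Minimal fixed-effect inverse-variance pooling for one gene across
# datasets; effect sizes and standard errors are made-up numbers.
import numpy as np

effects = np.array([0.42, 0.35, 0.51])   # per-dataset male-female effect
ses = np.array([0.10, 0.15, 0.12])       # per-dataset standard errors

w = 1.0 / ses**2                          # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)  # pooled effect size
pooled_se = np.sqrt(1.0 / np.sum(w))
z = pooled / pooled_se                    # Wald z-statistic
print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}, z = {z:.2f}")
```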

  17. Constructing a large-scale 3D Geologic Model for Analysis of the Non-Proliferation Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Wagoner, J; Myers, S

    2008-04-09

We have constructed a regional 3D geologic model of the southern Great Basin, in support of a seismic wave propagation investigation of the 1993 Nonproliferation Experiment (NPE) at the Nevada Test Site (NTS). The model is centered on the NPE and spans longitude -119.5° to -112.6° and latitude 34.5° to 39.8°; the depth ranges from the topographic surface to 150 km below sea level. The model includes the southern half of Nevada, as well as parts of eastern California, western Utah, and a portion of northwestern Arizona. The upper crust is constrained by both geologic and geophysical studies, while the lower crust and upper mantle are constrained by geophysical studies. The mapped upper crustal geologic units are Quaternary basin fill, Tertiary deposits, pre-Tertiary deposits, intrusive rocks of all ages, and calderas. The lower crust and upper mantle are parameterized with 5 layers, including the Moho. Detailed geologic data, including surface maps, borehole data, and geophysical surveys, were used to define the geology at the NTS. Digital geologic outcrop data were available for both Nevada and Arizona, whereas geologic maps for California and Utah were scanned and hand-digitized. Published gravity data (2 km spacing) were used to determine the thickness of the Cenozoic deposits and thus estimate the depth of the basins. The free surface is based on a 10 m lateral resolution DEM at the NTS and a 90 m lateral resolution DEM elsewhere. Variations in crustal thickness are based on receiver function analysis and a framework compilation of reflection/refraction studies. We used Earthvision (Dynamic Graphics, Inc.) to integrate the geologic and geophysical information into a model of x,y,z,p nodes, where p is a unique integer index value representing the geologic unit. For seismic studies, the geologic units are mapped to specific seismic velocities. The gross geophysical structure of the crust and upper mantle is taken from regional surface
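
    The final step described above, mapping each node's integer geologic-unit index p to seismic properties, amounts to a simple lookup. A minimal sketch; the unit list follows the abstract, but the property values are illustrative placeholders:

```python
# Sketch of the unit-index-to-velocity mapping; the geologic units follow
# the abstract, but the property values are illustrative placeholders.
import numpy as np

# p-index -> (Vp km/s, Vs km/s, density g/cm3); values are assumptions
unit_props = {
    1: (2.0, 0.9, 2.0),   # Quaternary basin fill
    2: (3.5, 1.9, 2.3),   # Tertiary deposits
    3: (5.8, 3.4, 2.7),   # pre-Tertiary deposits
}

nodes_p = np.array([1, 1, 2, 3, 2])   # geologic-unit index per model node
vp = np.array([unit_props[p][0] for p in nodes_p])
print(vp)                              # Vp assigned node by node
```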

  18. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series

  19. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    Science.gov (United States)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a
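
    The mechanics referred to above, applying an L-BFGS inverse-Hessian approximation built from the latest m vector/gradient pairs, with the preconditioner entering as the initial approximation H0, are captured by the standard two-loop recursion. A minimal sketch (a generic implementation, not the authors' system) with a diagonal H0:

```python
# Generic L-BFGS two-loop recursion (a sketch, not the authors' system).
# s_list/y_list hold the latest m iterate/gradient difference pairs,
# oldest first; h0_diag is the diagonal preconditioner (initial H0).
import numpy as np

def lbfgs_apply(grad, s_list, y_list, h0_diag):
    """Return an approximation to H^{-1} @ grad."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q = q - alpha * y
    r = h0_diag * q    # the preconditioner enters here as H0
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos),
                                  reversed(alphas)):
        beta = rho * np.dot(y, r)
        r = r + s * (alpha - beta)
    return r

# toy usage with a single stored pair and an identity preconditioner
g = np.array([1.0, -2.0])
s = np.array([0.1, 0.2]); y = np.array([0.2, 0.6])
print(lbfgs_apply(g, [s], [y], h0_diag=np.ones(2)))
```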

  20. Equipment of visualization environment of a large-scale structural analysis system. Visualization using AVS/Express of an ADVENTURE system

    International Nuclear Information System (INIS)

    Miyazaki, Mikiya

    2004-02-01

Data visualization is performed in many research fields, and many special-purpose software packages exist today. However, such packages typically interface with only a small number of solvers, so in many simulations the data must be converted for the visualization software between analysis and visualization before practical use. This report describes the setup of a data visualization environment in which AVS/Express was installed, in response to many requests from users of the large-scale structural analysis system provided as ITBL community software. This environment makes it possible to use the ITBL visualization server as a visualization device after computation on the ITBL computer. Moreover, considerable use within the community is expected once it is merged into the ITBL/AVS environment in the future. (author)

  1. Laser ablation inductively coupled plasma dynamic reaction cell mass spectrometry for the multi-element analysis of polymers

    Science.gov (United States)

    Resano, M.; García-Ruiz, E.; Vanhaecke, F.

    2005-11-01

In this work, the potential of laser ablation-inductively coupled plasma-mass spectrometry for the fast analysis of polymers has been explored. Different real-life samples (polyethylene shopping bags, an acrylonitrile butadiene styrene material and various plastic bricks) as well as several reference materials (VDA 001 to 004, Cd in polyethylene) have been selected for the study. Two polyethylene reference materials (ERM-EC 680 and 681), for which a reference or indicative value for the most relevant metals is available, have proved their suitability as standards for calibration. Special attention has been paid to the difficulties expected for the determination of Cr at the μg g⁻¹ level in this kind of materials, due to the interference of ArC⁺ ions on the most abundant isotopes of Cr. The use of ammonia as a reaction gas in a dynamic reaction cell is shown to alleviate this problem, resulting in a limit of detection of 0.15 μg g⁻¹ for this element, while limiting only modestly the possibilities of the technique for simultaneous multi-element analysis. In this regard, As is the analyte most seriously affected by the use of ammonia, and its determination has to be carried out in vented mode, at the expense of measuring time. In all cases studied, accurate results could be obtained for elements ranging in content from the sub-μg g⁻¹ level to tens of thousands of μg g⁻¹. However, the use of an element of known concentration as internal standard may be needed for materials with a matrix significantly different from that of the standard (polyethylene in this work). Precision ranged between 5% and 10% RSD for elements found at the 10 μg g⁻¹ level or higher, while this value could deteriorate to 20% for analytes found at the sub-μg g⁻¹ level. Overall, the technique evaluated presents many advantages for the fast and accurate multi-element analysis of these materials, avoiding laborious digestion procedures and minimizing the risk of analyte losses due
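
    The internal-standard correction mentioned above can be sketched in a few lines: the analyte signal is ratioed to the signal of an element of known concentration to compensate for matrix-dependent differences in ablated mass. The function and all numbers below are illustrative assumptions:

```python
# Sketch of an internal-standard correction; the function and all numbers
# are illustrative assumptions, not values from the paper.
def concentration(i_analyte, i_istd, c_istd, sensitivity_ratio):
    """c_istd: known internal-standard concentration in the sample;
    sensitivity_ratio: analyte/IS sensitivity from the calibration standard."""
    return (i_analyte / i_istd) * c_istd / sensitivity_ratio

# e.g. a Cr intensity ratioed to an internal standard of known content
c_cr = concentration(i_analyte=5.2e4, i_istd=2.6e5, c_istd=120.0,
                     sensitivity_ratio=0.8)
print(f"Cr ~ {c_cr:.0f} ug/g")
```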

  2. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    International Nuclear Information System (INIS)

Foo, Jasmine; Wan, Xiaoliang; Karniadakis, George Em

    2008-01-01

Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L² error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods
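
    The essential idea, discretizing the parametric space into elements and applying a quadrature rule on each, can be illustrated with a toy one-dimensional example. The sketch below (not the paper's implementation) estimates the mean of u(ξ) = exp(ξ) for ξ ~ U(-1, 1) using Gauss-Legendre nodes per element:

```python
# Toy 1-D multi-element probabilistic collocation (not the paper's code):
# estimate E[u] for u(xi) = exp(xi), xi ~ U(-1, 1), with Gauss-Legendre
# nodes on each element of the parameter space.
import numpy as np

def me_pcm_mean(f, n_elements=4, n_nodes=3, a=-1.0, b=1.0):
    edges = np.linspace(a, b, n_elements + 1)
    x_ref, w_ref = np.polynomial.legendre.leggauss(n_nodes)  # on [-1, 1]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = 0.5 * (hi - lo) * x_ref + 0.5 * (hi + lo)  # map to element
        w = 0.5 * (hi - lo) * w_ref
        total += np.sum(w * f(x))
    return total / (b - a)        # divide by support of uniform density

print(me_pcm_mean(np.exp))        # exact value: (e - 1/e)/2 ~ 1.1752
```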

  3. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

As part of the Site Characterisation and Validation programme, the results of the large scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie in a narrow range of two to three orders of magnitude. The test design did not allow fracture zones to be tested individually; this could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could be either because there is no hydraulic connection, or because there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  4. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

This article describes an evaluation method for the faultless function of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article contains a comparative analysis of the factors which determine the faultless function of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and program for the analysis of fault rates in LSI and VLSI circuits.

  5. A Normalization-Free and Nonparametric Method Sharpens Large-Scale Transcriptome Analysis and Reveals Common Gene Alteration Patterns in Cancers.

    Science.gov (United States)

    Li, Qi-Gang; He, Yong-Han; Wu, Huan; Yang, Cui-Ping; Pu, Shao-Yan; Fan, Song-Qing; Jiang, Li-Ping; Shen, Qiu-Shuo; Wang, Xiao-Xiong; Chen, Xiao-Qiong; Yu, Qin; Li, Ying; Sun, Chang; Wang, Xiangting; Zhou, Jumin; Li, Hai-Peng; Chen, Yong-Bin; Kong, Qing-Peng

    2017-01-01

Heterogeneity in transcriptional data hampers the identification of differentially expressed genes (DEGs) and the understanding of cancer, essentially because current methods rely on cross-sample normalization and/or distribution assumptions, both sensitive to heterogeneous values. Here, we developed a new method, Cross-Value Association Analysis (CVAA), which overcomes this limitation and is more robust to heterogeneous data than the other methods. Applying CVAA to a more complex pan-cancer dataset containing 5,540 transcriptomes discovered numerous new DEGs and many previously rarely explored pathways/processes; some of them were validated, both in vitro and in vivo, to be crucial in tumorigenesis, e.g., alcohol metabolism (ADH1B), chromosome remodeling (NCAPH) and the complement system (Adipsin). Together, we present a sharper tool to navigate large-scale expression data and gain new mechanistic insights into tumorigenesis.

  6. Multielement analysis of archaic Chinese bronze and antique coins by fast neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Y.H. (Academia Sinica, Lanzhou, Gansu (China). Inst. of Modern Physics); Pepelnik, R.; Fanger, H.U. (GKSS-Forschungszentrum Geesthacht GmbH, Geesthacht-Tesperhude (Germany, F.R.). Inst. fuer Physik)

    1990-01-01

Samples of archaic bronze have been investigated by fast neutron activation analysis using both the absolute and the relative method. The components Cu, Zn, Sn and Pb have been determined quantitatively. For the detection of lead via the short-lived isomeric state 207mPb, a cyclic activation and measurement technique was used, with pneumatic sample transfer between the detector and the central irradiation position of the neutron tube. For non-destructive analysis of antique Chinese coins the samples had to be irradiated outside the neutron generator KORONA. The activation reactions, the evaluation of the elemental concentrations and the accuracy of the results are discussed. The data were corrected for γ-ray self-absorption in the samples and summing of coincident γ-rays in the detector. According to reported typical compositions of Chinese bronze from different dynasties, the age of the samples has been derived from the results obtained. (orig.).
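
    The cyclic technique described above is driven by the usual activation timing factors: a saturation factor during irradiation, a decay factor during sample transfer, and a counting factor during measurement. A minimal sketch of these factors; the cycle times are assumptions, and the 207mPb half-life of roughly 0.8 s is why short, repeated cycles pay off:

```python
# Sketch of the activation timing factors behind the cyclic technique:
# saturation during irradiation, decay during transfer, and a counting
# factor during measurement. The cycle times here are assumptions.
import numpy as np

def timing_factors(half_life, t_irr, t_decay, t_count):
    lam = np.log(2.0) / half_life
    s = 1.0 - np.exp(-lam * t_irr)                        # saturation
    d = np.exp(-lam * t_decay)                            # decay
    c = (1.0 - np.exp(-lam * t_count)) / (lam * t_count)  # counting
    return s * d * c

# 207mPb (half-life ~0.8 s): short cycles, repeated many times, pay off
print(timing_factors(half_life=0.8, t_irr=2.0, t_decay=0.3, t_count=2.0))
```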

  7. Multielement characterization of atmospheric pollutants by x-ray fluorescence analysis and instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Rancitelli, L.A.; Tanner, T.M.

    1976-01-01

The simultaneous measurement of a wide spectrum of elements in aerosols collected on air filters and in rainwater can yield information on the origin, transport, and removal of atmospheric pollutants. In order to determine the elemental content of these aerosols, a pair of highly sensitive, precise and complementary instrumental techniques, x-ray fluorescence and neutron activation analysis, have been developed and employed. Data are presented on the results of combined x-ray fluorescence and activation analysis of aerosols collected in a number of urban areas of the USA and from the 80th meridian sampling network in March 1972. From a comparison of these ratios in granite and diabase with those of filters placed in urban areas, it is evident that Zn, Se, Sb, Hg, and Pb levels have been increased by as much as several orders of magnitude. Al, Co, La, Fe, Eu, Sm, Tb, Ta, Hf, and Th appear to exist at levels compatible with an origin in the earth's crust
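
    The granite/diabase comparison described above is essentially a crustal enrichment-factor calculation: element-to-reference-element ratios in the sample are divided by the same ratios in crustal material, with values near unity suggesting a crustal origin and large values suggesting pollution sources. A minimal sketch with illustrative numbers:

```python
# Sketch of a crustal enrichment-factor calculation; the reference
# element (Al) follows common practice and the numbers are illustrative.
def enrichment_factor(x_sample, ref_sample, x_crust, ref_crust):
    return (x_sample / ref_sample) / (x_crust / ref_crust)

# e.g. Se on an urban filter vs. average crustal rock
ef_se = enrichment_factor(x_sample=15.0, ref_sample=800.0,
                          x_crust=0.05, ref_crust=80000.0)
print(f"EF(Se) ~ {ef_se:.0f}")   # >> 1, i.e. strongly enriched
```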

  8. Geomorphic and hydraulic controls on large-scale riverbank failure on a mixed bedrock-alluvial river system, the River Murray, South Australia: a bathymetric analysis.

    Science.gov (United States)

    De Carli, E.; Hubble, T.

    2014-12-01

During the peak of the Millennium Drought (1997-2010) pool levels in the lower River Murray in South Australia dropped 1.5 metres below sea level, resulting in large-scale mass failure of the alluvial banks. The largest of these failures occurred without signs of prior instability at Long Island Marina, whereby a 270 metre length of populated and vegetated riverbank collapsed in a series of rotational failures. Analysis of long-reach bathymetric surveys of the river channel revealed a strong relationship between geomorphic and hydraulic controls on channel width and downstream alluvial failure. As the entrenched channel planform meanders within and encroaches upon its bedrock valley confines, the channel width is 'pinched' and decreases by up to half, resulting in a deepening thalweg and channel bed incision. The authors posit that flow and shear velocities increase at these geomorphically controlled 'pinch-points', resulting in complex and variable hydraulic patterns such as erosional scour eddies, which act to scour the toe of the slope, over-steepening and destabilising the alluvial margins. Analysis of bathymetric datasets between 2009 and 2014 revealed signs of active incision and erosional scour of the channel bed. This is counter to conceptual models which deem the backwater zone of a river to be one of decelerating flow and thus sediment deposition. Complex and variable flow patterns have been observed in other mixed alluvial-bedrock river systems, and signs of active incision have been observed in the backwater zone of the Mississippi River, United States. The incision and widening of the lower Murray River suggest the channel is in an erosional phase of channel readjustment, which has implications for riverbank collapse on the alluvial margins. The prevention of seawater ingress due to barrage construction at the Murray mouth and Southern Ocean confluence allowed pool levels to drop significantly during the Millennium Drought, reducing lateral confining support to the

  9. Multielement analysis of Nigerian chewing sticks by instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Asubiojo, O.I.; Guinn, V.P.

    1982-01-01

    In Nigeria, various parts of various species of native plants have long been used for dental hygiene, with reportedly considerable effectiveness. These materials are known as 'chewing sticks'. This study was an effort to ascertain whether any unusual trace element concentrations might be present in Nigerian chewing sticks. Results are presented for 17 elements (Na, Mg, Al, Cl, K, Ca, Sc, V, Mn, Fe, Co, Zn, Br, Cs, La, Sm, Au) detected and measured in 12 species of such plants, via instrumental thermal-neutron activation analysis. (author)

  10. Multielement analysis of Nigerian chewing sticks by instrumental neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Asubiojo, O.I.; Guinn, V.P. (California Univ., Irvine (USA). Dept. of Chemistry); Okunuga, A. (California Polytechnic Univ., Pomona, CA (USA))

    1982-01-01

    In Nigeria, various parts of various species of native plants have long been used for dental hygiene, with reportedly considerable effectiveness. These materials are known as 'chewing sticks'. This study was an effort to ascertain whether any unusual trace element concentrations might be present in Nigerian chewing sticks. Results are presented for 17 elements (Na, Mg, Al, Cl, K, Ca, Sc, V, Mn, Fe, Co, Zn, Br, Cs, La, Sm, Au) detected and measured in 12 species of such plants, via instrumental thermal-neutron activation analysis.

  11. Multi-element analysis of the obese subject by in vivo neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

Siwek, R A; Burkinshaw, L; Oxby, C B [Leeds General Infirmary (UK); Robinson, P A.J. [Saint James's University Hospital, Leeds (UK)

    1984-06-01

    The Leeds facility for in vivo neutron activation analysis has been modified and calibrated for the simultaneous measurement of nitrogen, potassium, sodium, chlorine, phosphorus and calcium in obese patients weighing up to 210 kg. The effects of body size and shape were incorporated into the calibration by measuring 14 anthropomorphic phantoms of known composition representing individual patients being treated for obesity. The phantoms were constructed from tissue substitutes representing lean skeletal and adipose tissues, arranged to simulate the distributions of the corresponding tissues within the patients, as visualised by CT scanning. The precision of the method, determined by measuring a single phantom ten times over a period of ten weeks, is between two and three per cent for all elements except calcium, for which it is 11.3%. Accuracy is estimated to be similar to precision. The procedure has been used to study changes in body composition of patients undergoing therapeutic starvation.

  12. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

We discuss a possible connection between the large scale structure formation and the baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale of 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  13. Multielement analysis of rice flour-unpolished reference material by instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Suzuki, Shogo; Hirai, Shoji

    1990-01-01

Trace elements in the NIES certified reference materials No. 10-a∼10-c Rice Flour-Unpolished, prepared by the National Institute for Environmental Studies of Japan (NIES), were determined by instrumental neutron activation analysis (INAA). A set of three samples with different Cd concentration levels was subjected to analysis. Portions of each sample (ca. 200∼1000 mg) were irradiated, either with thermal neutrons without a cadmium filter or with epithermal neutrons with a cadmium filter, in the Musashi Institute of Technology Research Reactor (MITRR). The activated samples were analyzed by three methods: conventional γ-ray spectrometry using a coaxial Ge detector, anticoincidence counting spectrometry, and coincidence counting spectrometry using a coaxial Ge detector and a well-type NaI(Tl) detector. Concentrations of 26∼28 elements were determined by these methods. The values obtained for most elements, except for Mg and K, were in good agreement with the NIES certified and reference values. Concentrations of 10 elements (S, Sc, V, Ag, Sb, Cs, Ba, La, Sm, Th), whose certified or reference values are not available from NIES, were also determined in this work. (author)

  14. Dynamics of Large-Scale Solar-Wind Streams Obtained by the Double Superposed Epoch Analysis: 2. Comparisons of CIRs vs. Sheaths and MCs vs. Ejecta

    Science.gov (United States)

    Yermolaev, Y. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Y.

    2017-12-01

This work is a continuation of our previous article (Yermolaev et al. in J. Geophys. Res. 120, 7094, 2015), which describes the average temporal profiles of interplanetary plasma and field parameters in large-scale solar-wind (SW) streams: corotating interaction regions (CIRs), interplanetary coronal mass ejections (ICMEs including both magnetic clouds (MCs) and ejecta), and sheaths as well as interplanetary shocks (ISs). As in the previous article, we use the data of the OMNI database, our catalog of large-scale solar-wind phenomena during 1976 - 2000 (Yermolaev et al. in Cosmic Res., 47, 2, 81, 2009) and the method of double superposed epoch analysis (Yermolaev et al. in Ann. Geophys., 28, 2177, 2010a). We rescale the duration of all types of structures in such a way that the beginnings and endings for all of them coincide. We present new detailed results comparing pair phenomena: 1) both types of compression regions (i.e. CIRs vs. sheaths) and 2) both types of ICMEs (MCs vs. ejecta). The obtained data allow us to suggest that the formation of the two types of compression regions is governed by the same physical mechanism, regardless of the type of piston (high-speed stream (HSS) or ICME); the differences are connected to the geometry (i.e. the angle between the speed gradient in front of the piston and the satellite trajectory) and the jumps in speed at the edges of the compression regions. In our opinion, one of the possible reasons behind the observed differences in the parameters in MCs and ejecta is that when ejecta are observed, the satellite passes farther from the nose of the ICME than when MCs are observed.
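
    The rescaling step described above, stretching every event onto a common epoch so that all beginnings and endings coincide, is simple to sketch: interpolate each event's time series onto a normalized duration and average. A minimal illustration (not the authors' code):

```python
# Sketch of the duration-rescaling step of a superposed epoch analysis
# (an illustration, not the authors' code): interpolate every event onto
# a normalized epoch [0, 1] and average.
import numpy as np

def superpose(events, n_points=100):
    """events: list of (t, values) pairs; event start/end at t[0]/t[-1]."""
    epoch = np.linspace(0.0, 1.0, n_points)
    rescaled = []
    for t, v in events:
        tau = (t - t[0]) / (t[-1] - t[0])   # normalize duration to [0, 1]
        rescaled.append(np.interp(epoch, tau, v))
    return epoch, np.mean(rescaled, axis=0)

# two toy "events" of very different duration collapse onto one profile
t1 = np.linspace(0.0, 10.0, 50)
t2 = np.linspace(0.0, 30.0, 200)
epoch, profile = superpose([(t1, np.sin(t1 / 10.0)),
                            (t2, np.sin(t2 / 30.0))])
```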

  15. Quantitative analysis on the environmental impact of large-scale water transfer project on water resource area in a changing environment

    Directory of Open Access Journals (Sweden)

    D. H. Yan

    2012-08-01

Full Text Available The interbasin long-distance water transfer project provides key support for the reasonable allocation of water resources over a large-scale area, as it can optimize the spatio-temporal allocation of water resources to secure the amount of water available. Large-scale water transfer projects have a deep influence on ecosystems; besides, global climate change causes uncertainty in, and additive effects on, the environmental impact of water transfer projects. Therefore, how to assess the ecological and environmental impact of such megaprojects in both the construction and operation phases has attracted a lot of attention. The water-output area of the western route of China's South-North Water Transfer Project was taken as the study area of the present article. According to relevant evaluation principles and on the basis of background analysis, we identified the influencing factors and established the diagnostic index system. The climate-hydrology-ecology coupled simulation model was used to simulate and predict ecological and environmental responses of the water resource area in a changing environment. The emphasis of the impact evaluation was placed on reservoir construction and operation scheduling, representative river corridors and wetlands, natural reserves and the water environment below the dam sites. In the end, an overall evaluation of the comprehensive influence of the project was conducted. The research results were as follows: the environmental impacts of the western route project in the water resource area were concentrated in two aspects: the permanent destruction of vegetation during the phase of dam construction and river impoundment, and the significant influence on the hydrological regime of the natural river corridor after the implementation of water extraction. The impact on local climate, vegetation ecology, typical wetlands, natural reserves and the water environment of river basins below the dam sites was small.

  16. Quantitative analysis on the environmental impact of large-scale water transfer project on water resource area in a changing environment

    Science.gov (United States)

    Yan, D. H.; Wang, H.; Li, H. H.; Wang, G.; Qin, T. L.; Wang, D. Y.; Wang, L. H.

    2012-08-01

The interbasin long-distance water transfer project provides key support for the reasonable allocation of water resources over a large-scale area, as it can optimize the spatio-temporal allocation of water resources to secure the amount of water available. Large-scale water transfer projects have a deep influence on ecosystems; besides, global climate change causes uncertainty in, and additive effects on, the environmental impact of water transfer projects. Therefore, how to assess the ecological and environmental impact of such megaprojects in both the construction and operation phases has attracted a lot of attention. The water-output area of the western route of China's South-North Water Transfer Project was taken as the study area of the present article. According to relevant evaluation principles and on the basis of background analysis, we identified the influencing factors and established the diagnostic index system. The climate-hydrology-ecology coupled simulation model was used to simulate and predict ecological and environmental responses of the water resource area in a changing environment. The emphasis of the impact evaluation was placed on reservoir construction and operation scheduling, representative river corridors and wetlands, natural reserves and the water environment below the dam sites. In the end, an overall evaluation of the comprehensive influence of the project was conducted. The research results were as follows: the environmental impacts of the western route project in the water resource area were concentrated in two aspects: the permanent destruction of vegetation during the phase of dam construction and river impoundment, and the significant influence on the hydrological regime of the natural river corridor after the implementation of water extraction. The impact on local climate, vegetation ecology, typical wetlands, natural reserves and the water environment of river basins below the dam sites was small.

  17. Application of inductively coupled plasma mass spectrometry for multielement analysis in small sample amounts of thyroid tissue from Chernobyl area

    International Nuclear Information System (INIS)

    Becker, J.S.; Dietze, H.J.; Boulyga, S.F.; Bazhanova, N.N.; Kanash, N.V.; Malenchenko, A.F.

    2000-01-01

As a result of the Chernobyl nuclear power plant accident in 1986, thyroid pathologies occurred among children in some regions of Belarus. Besides the irradiation of children's thyroids by radioactive iodine and caesium nuclides, toxic elements from fallout are a direct risk to health. Inductively coupled plasma quadrupole-based mass spectrometry (ICP-MS) and instrumental neutron activation analysis (INAA) were used for multielement determination in small amounts (1-10 mg) of human thyroid tissue samples. The accuracy of the applied analytical technique for small biological sample amounts was checked using the NIST standard reference material oyster tissue (SRM 1566b). Almost all essential elements as well as a number of toxic elements such as Cd, Pb, Hg, U, etc. were determined in a multitude of human thyroid tissues by quadrupole-based ICP-MS using micronebulization. In general, the thyroid tissue affected by pathology is characterized by higher calcium content. Some other elements, among them Sr, Zn, Fe, Mn, V, As, Cr, Ni, Pb, U, Ba, Sb, were also accumulated in such tissue. The results obtained will be used as initial material for further specific studies of the role of particular elements in thyroid pathology development

  18. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...

  19. Economic analysis of a new class of vanadium redox-flow battery for medium- and large-scale energy storage in commercial applications with renewable energy

    International Nuclear Information System (INIS)

    Li, Ming-Jia; Zhao, Wei; Chen, Xi; Tao, Wen-Quan

    2017-01-01

Highlights: • A new class of the vanadium redox-flow battery (VRB) is developed. • The new class of VRB is more economic. It is a simple process and easy to scale up. • There are three levels of cell stacks and electrolytes with different qualities. • The economic analysis of the VRB system for renewable energy bases is carried out. • Related policies and suggestions based on the results are provided. - Abstract: Interest in implementing the vanadium redox-flow battery (VRB) for energy storage is growing, as it is widely applicable to large-scale renewable energy (e.g. wind energy and solar photo-voltaic), developing distributed generation, lowering the imbalance and increasing the usage of electricity. However, a comprehensive economic analysis of the VRB for energy storage across various commercial applications is still lacking, yet it is fundamental for implementation of the VRB in commercial electricity markets. In this study, based on a new class of the VRB that was developed by our team, a comprehensive economic analysis of the VRB for large-scale energy storage is carried out. The results illustrate the economics of VRB applications for three typical energy systems: (1) the VRB storage system replacing the conventional lead-acid battery as the uninterrupted power supply (UPS) battery for office buildings and hospitals; (2) application of the vanadium battery in household distributed photo-voltaic power generation systems; (3) wind power and solar power stations equipped with VRB storage systems. The economic perspectives and cost-benefit analysis of the VRB storage systems may underpin optimisation for maximum profitability. Two findings are drawn from this case. First, with a fixed power capacity or a fixed discharging time, a greater profit ratio is generated by a longer discharging time or a larger power capacity. Second, when the profit ratio, discharging time and power capacity are all variables, it is necessary to find out the best optimisation

  20. Large-scale evaluation of dynamically important residues in proteins predicted by the perturbation analysis of a coarse-grained elastic model

    Directory of Open Access Journals (Sweden)

    Tekpinar Mustafa

    2009-07-01

Full Text Available Abstract Background It is increasingly recognized that protein functions often require intricate conformational dynamics, which involve a network of key amino acid residues that couple spatially separated functional sites. Tremendous efforts have been made to identify these key residues by experimental and computational means. Results We have performed a large-scale evaluation of the predictions of dynamically important residues by a variety of computational protocols, including three based on the perturbation and correlation analysis of a coarse-grained elastic model. This study is performed for two lists of test cases with >500 pairs of protein structures. The dynamically important residues predicted by the perturbation and correlation analysis are found to be strongly or moderately conserved in >67% of test cases. They form a sparse network of residues which are clustered both in 3D space and along the protein sequence. Their overall conservation is attributed to their dynamic role rather than ligand binding or high network connectivity. Conclusion By modeling how the protein structural fluctuations respond to residue-position-specific perturbations, our highly efficient perturbation and correlation analysis can be used to dissect the functional conformational changes in various proteins with a residue level of detail. The predictions of dynamically important residues serve as promising targets for mutational and functional studies.
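
    The perturbation idea, probing how predicted fluctuations respond to residue-specific changes in a coarse-grained elastic model, can be sketched with a Gaussian-network-style toy example. This is a simplified illustration, not the authors' protocol; the coordinates and cutoff below are arbitrary:

```python
# Gaussian-network-style sketch of the perturbation analysis idea: build a
# Kirchhoff (connectivity) matrix from coordinates, stiffen one residue's
# springs, and measure how all predicted fluctuations respond.
# Illustrative only; coordinates and cutoff are toy values.
import numpy as np

def kirchhoff(coords, cutoff=7.0):
    """Kirchhoff (connectivity) matrix of a Gaussian network model."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    K = -(d < cutoff).astype(float)
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))
    return K

def fluctuations(K):
    """Per-residue mean-square fluctuations ~ diagonal of the pseudoinverse."""
    return np.diag(np.linalg.pinv(K))

coords = np.random.default_rng(0).normal(scale=5.0, size=(30, 3))
K = kirchhoff(coords)
base = fluctuations(K)

i = 10                              # perturb the contacts of residue i
Kp = K.copy()
Kp[i, :] *= 1.5
Kp[:, i] *= 1.5
np.fill_diagonal(Kp, 0.0)
np.fill_diagonal(Kp, -Kp.sum(axis=1))   # restore the Laplacian property
response = np.abs(fluctuations(Kp) - base)   # large response = important
```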

  1. Numerical Analysis of Consolidation Settlement and Creep Deformation of Artificial Island Revetment Structure in a Large-Scale Marine Reclamation Land Project

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2015-09-01

Full Text Available In order to analyze the influential factors of soft foundation settlement in a marine reclamation land project, the consolidation settlement and pore pressure dissipation of the entire area are numerically simulated using the Soft-Soil-Creep Model, in which the PLAXIS finite element software for professional geotechnical engineering is applied and empirical data from Japan's Kansai airport project are used. Moreover, the figures of settlement and pore pressure results in the different basic periods are drawn, and the corresponding analysis conclusions are obtained based on the comparison among the results from the computational parameters of depth. In addition, the influence of the various parameters on the settlement results is established by running a parameter sensitivity analysis with the Soft-Soil-Creep Model, and the experience and conclusions can serve as a reference in the design and construction of similar large-scale marine reclamation land projects. As the empirical value method for the creep index has not yet been applied widely, further research needs to be done.

  2. Breakdowns in coordinated decision making at and above the incident management team level: an analysis of three large scale Australian wildfires.

    Science.gov (United States)

    Bearman, Chris; Grunwald, Jared A; Brooks, Benjamin P; Owen, Christine

    2015-03-01

Emergency situations are by their nature difficult to manage, and success in such situations is often highly dependent on effective team coordination. Breakdowns in team coordination can lead to significant disruption to an operational response. Breakdowns in coordination were explored in three large-scale bushfires in Australia: the Kilmore East fire, the Wangary fire, and the Canberra Firestorm. Data from these fires were analysed using a top-down and bottom-up qualitative analysis technique. Forty-four breakdowns in coordinated decision making were identified, which yielded 83 disconnects grouped into three main categories: operational, informational and evaluative. Disconnects were specific instances where differences in understanding existed between team members. The reasons why disconnects occurred were largely consistent across the three sets of data. In some cases multiple disconnects occurred in a temporal sequence, which suggests some evidence of disconnects creating states that were conducive to the occurrence of further disconnects. In terms of resolution, evaluative disconnects were nearly always resolved; however, operational and informational disconnects were rarely resolved effectively. The exploratory data analysis and discussion presented here represent the first systematic research to provide information about the reasons why breakdowns occur in emergency management, and present an account of how team processes can act to disrupt coordination and the operational response. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  3. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  4. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  5. Determinant factors of residential consumption and perception of energy conservation: Time-series analysis by large-scale questionnaire in Suita, Japan

    International Nuclear Information System (INIS)

    Hara, Keishiro; Uwasu, Michinori; Kishita, Yusuke; Takeda, Hiroyuki

    2015-01-01

    In this study, we examined determinant factors associated with the residential consumption and perception of savings of electricity and city gas; this was based on data collected from a large-scale questionnaire sent to households in Suita, Osaka Prefecture, Japan, in two different years: 2009 and 2013. We applied an ordered logit model to determine the overall trend of the determinant factors, and then we performed a more detailed analysis in order to understand the reasons why the determinant factors changed between the two periods. Results from the ordered logit model reveal that electricity and gas consumption was primarily determined by such factors as household income, number of family members, the number of home appliances, and the perceptions of energy savings; there was not much difference between the two years, although in 2013, household income did not affect the perception of energy savings. Detailed analysis demonstrated that households with high energy consumption and those with moderate consumption are becoming polarized and that there was a growing gap between consumption behavior and the perception of conservation. The implications derived from the analyses provide an essential insight into the design of a municipal policy to induce lifestyle changes for an energy-saving society. - Highlights: • Questionnaire was conducted to households in two years for time-series analysis. • We analyzed residential energy consumption and perception of savings in households. • Determinant factors for consumption and perception of savings were identified. • Households being wasteful of energy are also found willing to cut consumption. • Policy intervention could affect consumption pattern and perception of savings.
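
    The ordered logit step can be sketched with statsmodels' OrderedModel (available in statsmodels 0.12 and later); the variables and data below are synthetic stand-ins for the questionnaire fields, not the study's data:

```python
# Sketch of an ordered logit on synthetic stand-ins for the questionnaire
# fields (income, family size, appliance count -> ordered usage level).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.normal(5.0, 2.0, 500),       # household income (toy units)
    "members": rng.integers(1, 6, 500),        # number of family members
    "appliances": rng.integers(3, 15, 500),    # number of home appliances
})
# synthetic latent consumption, cut into an ordered three-level outcome
latent = 0.3 * df["income"] + 0.5 * df["members"] + rng.logistic(size=500)
df["usage"] = pd.cut(latent, bins=[-np.inf, 1.0, 3.0, np.inf],
                     labels=["low", "mid", "high"], ordered=True)

model = OrderedModel(df["usage"], df[["income", "members", "appliances"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```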

  6. Nondestructive multielement analyses of airborne particulates by combined uses of instrumental neutron activation analysis and energy dispersive X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Mamuro, Tetsuo; Matsuda, Yatsuka; Mizohata, Akira

    1974-01-01

Combined use of instrumental neutron activation analysis and energy dispersive X-ray fluorescence analysis makes it possible to analyze nondestructively a considerably large number of elements in airborne particulates. We have confirmed that up to 45 elements can be analyzed without any chemical procedures for urban airborne particulate samples. As radiation spectrometry by semiconductor detectors and automatic data reduction by electronic computation are common to the two techniques, their combined use presents no particular difficulty. Several elements can be analyzed by both techniques, and therefore the reliability of the analytical results can be confirmed by comparing the data obtained by each with the other. It is noted that this confirmation can be made for the very same sample. In this article our experiences with multielement analyses of airborne particulates are described, along with some problems to be solved in further studies. (auth.)

  7. Large-scale analysis of protein expression changes in human keratinocytes immortalized by human papilloma virus type 16 E6 and E7 oncogenes

    Directory of Open Access Journals (Sweden)

    Arnouk Hilal

    2009-08-01

Full Text Available Abstract Background Infection with high-risk type human papilloma viruses (HPVs) is associated with cervical carcinomas and with a subset of head and neck squamous cell carcinomas. Viral E6 and E7 oncogenes cooperate to achieve cell immortalization by a mechanism that is not yet fully understood. Here, human keratinocytes were immortalized by long-term expression of HPV type 16 E6 or E7 oncoproteins, or both. Proteomic profiling was used to compare expression levels for 741 discrete protein features. Results Six replicate measurements were performed for each group using two-dimensional difference gel electrophoresis (2D-DIGE). The median within-group coefficient of variation was 19–21%. Significance of between-group differences was tested based on Significance Analysis of Microarray and fold change. Expression of 170 (23%) of the protein features changed significantly in immortalized cells compared to primary keratinocytes. Most of these changes were qualitatively similar in cells immortalized by E6, E7, or E6/7 expression, indicating convergence on a common phenotype, but fifteen proteins (~2%) were outliers in this regulatory pattern. Ten demonstrated opposite regulation in E6- and E7-expressing cells, including the cell cycle regulator p16INK4a; the carbohydrate binding protein Galectin-7; two differentially migrating forms of the intermediate filament protein Cytokeratin-7; HSPA1A (Hsp70-1); and five unidentified proteins. Five others had a pattern of expression that suggested cooperativity between the co-expressed oncoproteins. Two of these were identified as forms of the small heat shock protein HSPB1 (Hsp27). Conclusion This large-scale analysis provides a framework for understanding the cooperation between E6 and E7 oncoproteins in HPV-driven carcinogenesis.

  8. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  9. Seismic texture and amplitude analysis of large scale fluid escape pipes using time lapses seismic surveys: examples from the Loyal Field (Scotland, UK)

    Science.gov (United States)

    Maestrelli, Daniele; Jihad, Ali; Iacopini, David; Bond, Clare

    2016-04-01

) affected by large scale fracture (semblance image) and seem consistent with a suspended mud/sand mixture non-fluidized fluid flow. Near-middle-far offset amplitude analysis confirms that most of the amplitude anomalies within the pipe conduits and termini are only partly related to gas. An interpretation of the possible textures observed is proposed, with a discussion of the noise and artefacts induced by resolution and migration problems. Possible formation mechanisms for these pipes are discussed.

  10. Identification and functional characterization of HIV-associated neurocognitive disorders with large-scale Granger causality analysis on resting-state functional MRI

    Science.gov (United States)

    Chockanathan, Udaysankar; DSouza, Adora M.; Abidin, Anas Z.; Schifitto, Giovanni; Wismüller, Axel

    2018-02-01

    Resting-state functional MRI (rs-fMRI), coupled with advanced multivariate time-series analysis methods such as Granger causality, is a promising tool for the development of novel functional connectivity biomarkers of neurologic and psychiatric disease. Recently large-scale Granger causality (lsGC) has been proposed as an alternative to conventional Granger causality (cGC) that extends the scope of robust Granger causal analyses to high-dimensional systems such as the human brain. In this study, lsGC and cGC were comparatively evaluated on their ability to capture neurologic damage associated with HIV-associated neurocognitive disorders (HAND). Functional brain network models were constructed from rs-fMRI data collected from a cohort of HIV+ and HIV- subjects. Graph theoretic properties of the resulting networks were then used to train a support vector machine (SVM) model to predict clinically relevant parameters, such as HIV status and neuropsychometric (NP) scores. For the HIV+/- classification task, lsGC, which yielded a peak area under the receiver operating characteristic curve (AUC) of 0.83, significantly outperformed cGC, which yielded a peak AUC of 0.61, at all parameter settings tested. For the NP score regression task, lsGC, with a minimum mean squared error (MSE) of 0.75, significantly outperformed cGC, with a minimum MSE of 0.84 (p < 0.001, one-tailed paired t-test). These results show that, at optimal parameter settings, lsGC is better able to capture functional brain connectivity correlates of HAND than cGC. However, given the substantial variation in the performance of the two methods at different parameter settings, particularly for the regression task, improved parameter selection criteria are necessary and constitute an area for future research.
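
    Conventional Granger causality (cGC), the baseline compared above, reduces in the bivariate case to an F-test of whether lags of one series improve the autoregressive prediction of another. A minimal pairwise sketch is given below (this is not the lsGC method itself, which extends the idea to high-dimensional systems):

```python
# Minimal pairwise (conventional) Granger causality: series x "Granger-
# causes" y if adding lags of x to an autoregression of y reduces the
# residual sum of squares. Illustrative sketch, not the lsGC method.
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic for the hypothesis that x Granger-causes y, with p lags."""
    n = len(y)
    Y = y[p:]
    lags_y = np.array([y[t - p:t] for t in range(p, n)])
    lags_x = np.array([x[t - p:t] for t in range(p, n)])

    def rss(X):
        X = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    rss_r = rss(lags_y)                          # restricted: own lags only
    rss_f = rss(np.hstack([lags_y, lags_x]))     # full: plus lags of x
    df1, df2 = p, len(Y) - 2 * p - 1
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):                          # y is driven by lagged x
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(y, x))                           # large F: x -> y
```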

  11. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
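
    The register component of the solution maps naturally onto a simple data structure. The sketch below is a hypothetical minimal version: every field name, the drift threshold, and the change-signalling rule are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass, field
import time

@dataclass
class CameraRegisterEntry:
    """One optical-chain record in a hypothetical VSS register."""
    camera_id: str
    intrinsics: dict           # e.g. focal length, distortion coefficients
    extrinsics: dict           # camera pose in the scene
    lighting_lux: float        # last estimated illumination
    scene_complexity: float    # e.g. mean number of people in view
    updated: float = field(default_factory=time.time)

def relevant_change(old: CameraRegisterEntry, new: CameraRegisterEntry,
                    tol: float = 0.2) -> bool:
    """Signal the VSS administrator when lighting drifts by more than tol."""
    drift = abs(new.lighting_lux - old.lighting_lux) / max(old.lighting_lux, 1e-9)
    return drift > tol
```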

  12. Large-scale STI services in Avahan improve utilization and treatment seeking behaviour amongst high-risk groups in India: an analysis of clinical records from six states

    Directory of Open Access Journals (Sweden)

    Gurung Anup

    2011-12-01

    Background: Avahan, the India AIDS Initiative, implemented a large HIV prevention programme across six high-HIV-prevalence states amongst high-risk groups consisting of female sex workers, high-risk men who have sex with men, transgenders and injecting drug users in India. Utilization of the clinical services, health-seeking behaviour and trends in the syndromic diagnosis of sexually transmitted infections amongst these populations were measured using individual tracking data. Methods: The Avahan clinical monitoring system included individual tracking data pertaining to clinical services amongst high-risk groups. All clinic visits were recorded in the routine clinical monitoring system using unique identification numbers at the NGO level. Visits by individual clinic attendees were tracked from January 2005 to December 2009. An analysis examining the limited variables over time, stratified by risk group, was performed. Results: A total of 431,434 individuals, including 331,533 female sex workers, 10,280 injecting drug users, 82,293 men who have sex with men, and 7,328 transgenders, visited the clinics, with a total of 2,700,192 visits. Individuals made an average of 6.2 visits to the clinics during the study period. The number of visits per person increased annually, from 1.2 in 2005 to 8.3 in 2009. The proportion of attendees visiting clinics more than four times a year increased from 4% in 2005 to 26% in 2009. Conclusions: The programme demonstrated that acceptable and accessible services for marginalised and often difficult-to-reach populations can be brought to a very large scale using standardized approaches. Utilization of these services can dramatically improve health-seeking behaviour and reduce STI prevalence.

  13. Sensitivity of 2-[18F]fluoro-2-deoxyglucose positron emission tomography for advanced colorectal neoplasms: a large-scale analysis of 7505 asymptomatic screening individuals.

    Science.gov (United States)

    Sekiguchi, Masau; Kakugawa, Yasuo; Terauchi, Takashi; Matsumoto, Minori; Saito, Hiroshi; Muramatsu, Yukio; Saito, Yutaka; Matsuda, Takahisa

    2016-12-01

    The sensitivity of 2-[¹⁸F]fluoro-2-deoxyglucose positron emission tomography (FDG-PET) for advanced colorectal neoplasms among healthy subjects is not yet fully understood. The present study aimed to clarify this sensitivity by analyzing large-scale data from an asymptomatic screening population. A total of 7505 asymptomatic screenees who underwent both FDG-PET and colonoscopy at our Cancer Screening Division between February 2004 and March 2013 were analyzed. FDG-PET and colonoscopy were performed on consecutive days, and each examination was interpreted in a blinded fashion. The results of the two examinations were compared for each of six colonic segments, with those from colonoscopy set as the reference. The relationships between the sensitivity of FDG-PET and clinicopathological features of advanced neoplasms were also evaluated. Two hundred ninety-one advanced neoplasms, including 24 invasive cancers, were detected in 262 individuals. Thirteen advanced neoplasms (advanced adenomas) were excluded from the analysis because of the coexistence of lesions in the same colonic segment. The sensitivity, specificity, and positive and negative predictive values of FDG-PET for advanced neoplasms were 16.9% [95% confidence interval (CI) 12.7-21.8%], 99.3% (95% CI 99.2-99.4%), 13.5% (95% CI 10.1-17.6%), and 99.4% (95% CI 99.3-99.5%), respectively. The sensitivity was lower for lesions of less advanced histological grade, smaller size, and flat-type morphology, and for those located in the proximal colon. FDG-PET is believed to be difficult to use as a primary screening tool in population-based colorectal cancer screening because of its low sensitivity for advanced neoplasms. Even when it is used in opportunistic cancer screening, the limits of its sensitivity should be considered.

  14. Performance of granular activated carbon to remove micropollutants from municipal wastewater-A meta-analysis of pilot- and large-scale studies.

    Science.gov (United States)

    Benstoem, Frank; Nahrstedt, Andreas; Boehler, Marc; Knopp, Gregor; Montag, David; Siegrist, Hansruedi; Pinnekamp, Johannes

    2017-10-01

    For reducing organic micropollutants (MP) in municipal wastewater effluents, granular activated carbon (GAC) has been tested in various studies. We conducted a systematic literature search and found 44 studies dealing with the adsorption of MPs (carbamazepine, diclofenac, sulfamethoxazole) from municipal wastewater on GAC in pilot- and large-scale plants. Within our meta-analysis we plot the bed volumes (BV [m³ water/m³ GAC]) until the breakthrough criterion of MP-BV20% was reached, as a function of potentially relevant parameters (empty bed contact time EBCT, influent DOC (DOC₀) and manufacturing method). Moreover, we performed statistical tests (ANOVAs) to check the results for significance. The operating time of single adsorbers until breakthrough of diclofenac-BV20% differed by as much as 2500% (800-20,000 BV). There was still elimination of the "very well/well" adsorbable MPs such as carbamazepine and diclofenac even when the equilibrium of DOC had already been reached. No strong statistical significance of EBCT and DOC₀ on MP-BV20% could be found, owing to the lack of data and the high heterogeneity of the studies using GAC of different qualities. In further studies, adsorbers should be operated ≫20,000 BV for exact calculation of breakthrough curves, and the following parameters should be recorded: selected MPs; DOC₀; UVA₂₅₄; EBCT; product name, manufacturing method and raw material of the GAC; suspended solids (TSS); backwash interval; backwash program; and pressure drop within the adsorber. Based on our investigations we generally recommend using reactivated GAC to reduce the environmental impact and carrying out tests at pilot scale to collect reliable data for process design. Copyright © 2017 Elsevier Ltd. All rights reserved.
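
    The two throughput quantities used above follow from simple definitions, sketched below with illustrative numbers (the symbols follow the abstract; the figures are not from any of the 44 studies).

```python
def bed_volumes(v_water_m3: float, v_gac_m3: float) -> float:
    """Bed volumes treated, BV [m3 water / m3 GAC]."""
    return v_water_m3 / v_gac_m3

def ebct_minutes(v_gac_m3: float, flow_m3_per_h: float) -> float:
    """Empty bed contact time (EBCT) in minutes."""
    return v_gac_m3 / flow_m3_per_h * 60.0

# A 10 m3 adsorber fed at 40 m3/h reaches 20,000 BV after treating 200,000 m3:
print(bed_volumes(200_000, 10))   # 20000.0
print(ebct_minutes(10, 40))       # 15.0
```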

  15. How Did the Information Flow in the #AlphaGo Hashtag Network? A Social Network Analysis of the Large-Scale Information Network on Twitter.

    Science.gov (United States)

    Kim, Jinyoung

    2017-12-01

    As it becomes common for Internet users to use hashtags when posting and searching for information on social media, it is important to understand who builds a hashtag network and how information is circulated within the network. This article focused on unlocking the potential of the #AlphaGo hashtag network by addressing the following questions. First, the current study examined whether traditional opinion leadership (i.e., the influentials hypothesis) or grassroots participation by the public (i.e., the interpersonal hypothesis) drove the dissemination of information in the hashtag network. Second, several unique patterns of information distribution by key users were identified. Finally, the association between attributes of key users who exerted great influence on information distribution (i.e., the number of followers and follows) and their central status in the network was tested. To answer these research questions, a social network analysis was conducted using a large-scale hashtag network data set from Twitter (n = 21,870). The results showed that the leading actors in the network were actively receiving information from their followers rather than serving as intermediaries between the original information sources and the public. Moreover, the leading actors played several roles (i.e., conversation starters, influencers, and active engagers) in the network. Furthermore, the numbers of their follows and followers were significantly associated with their central status in the hashtag network. Based on the results, the current research explained how information was exchanged in the hashtag network by proposing the reciprocal model of information flow.
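
    The kind of centrality bookkeeping behind such a study can be sketched in a few lines with networkx; the toy graph below is invented (the real data set contained 21,870 tweets), and edge direction is assumed to point from information source to receiver.

```python
import networkx as nx

# Toy retweet/mention network; edges run from information source to receiver.
G = nx.DiGraph([("news_bot", "fan1"), ("fan1", "hub"), ("fan2", "hub"),
                ("hub", "fan3"), ("fan3", "news_bot")])

in_deg = nx.in_degree_centrality(G)   # who receives the most information
betw = nx.betweenness_centrality(G)   # who brokers between sources and public
print(max(in_deg, key=in_deg.get))    # 'hub': an actively receiving account
print(max(betw, key=betw.get))        # the highest-brokerage account
```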

  16. How many hours do you usually work? An analysis of the working hours questions in 26 large-scale surveys in 6 countries and the European Union.

    NARCIS (Netherlands)

    Dragstra, A.; Tijdens, K.

    2004-01-01

    This paper reviews how working hours are asked about in 26 large-scale surveys in 6 countries plus the European Union. Four dimensions of working time were investigated, notably the number of working hours, the timing of work, predictability and control over hours, and commuting time. Although almost all questionnaires ask for hours worked, the terminology varies greatly.

  17. How many hours do you usually work? An analysis of the working hours questions in 26 large-scale surveys in six countries and the European Union

    NARCIS (Netherlands)

    Tijdens, K.; Dragstra, A.

    2007-01-01

    This article reviews how working hours are asked for in 26 large-scale surveys in six countries plus the European Union. Four dimensions of working time were investigated, notably the number of working hours, the timing of work, predictability and control over hours, and commuting time. Although almost all questionnaires ask for hours worked, the terminology varies greatly.

  18. Analysis of the economic impact of large-scale deployment of biomass resources for energy and materials in the Netherlands : macro-economics biobased synthesis report

    NARCIS (Netherlands)

    Hoefnagels, R.; Dornburg, V.; Faaij, A.; Banse, M.A.H.

    2011-01-01

    The Bio-based Raw Materials Platform (PGG), part of the Energy Transition in The Netherlands, commissioned the Agricultural Economics Research Institute (LEI) and the Copernicus Institute of Utrecht University to conduct research on the macro-economic impact of large-scale deployment of biomass for energy and materials in the Netherlands.

  19. Development of multielement neutron-capture prompt γ-rays activation analysis method

    International Nuclear Information System (INIS)

    Liu Yuren; Xie Yali; Zhao Yunzhi; Liu Jiping; Meng Bonian

    1998-01-01

    The relationship between the content of the measured elements and the areas of the characteristic prompt γ-ray peaks is presented. The root-mean-square errors between the regression values of the instrumental analysis and the chemical analysis for some common elements are lower than 0.5 wt%. The effect of the neutron-moderating ("slowing") body was established, and the analytical sensitivity in iron-ore analysis was markedly enhanced. The FWHM of the spectrometer for the H prompt γ-ray peak (2.223 MeV) is 3 keV.

  20. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large-scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2-laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 ± 5 μm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further development of sensitive biosensor assays.

  1. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, the Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored the attendance of thirteen graduate students from universities in the United States and abroad.

  2. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large-scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields results for sd-shell nuclei as good as those of 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus ⁴⁶Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson-exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to ¹³⁰Ce and ¹²⁸Ba using the same effective nucleon-nucleon interaction. (Auth.)

  3. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  4. A large-scale genetic analysis reveals a strong contribution of the HLA class II region to giant cell arteritis susceptibility.

    Science.gov (United States)

    Carmona, F David; Mackie, Sarah L; Martín, Jose-Ezequiel; Taylor, John C; Vaglio, Augusto; Eyre, Stephen; Bossini-Castillo, Lara; Castañeda, Santos; Cid, Maria C; Hernández-Rodríguez, José; Prieto-González, Sergio; Solans, Roser; Ramentol-Sintas, Marc; González-Escribano, M Francisca; Ortiz-Fernández, Lourdes; Morado, Inmaculada C; Narváez, Javier; Miranda-Filloy, José A; Beretta, Lorenzo; Lunardi, Claudio; Cimmino, Marco A; Gianfreda, Davide; Santilli, Daniele; Ramirez, Giuseppe A; Soriano, Alessandra; Muratore, Francesco; Pazzola, Giulia; Addimanda, Olga; Wijmenga, Cisca; Witte, Torsten; Schirmer, Jan H; Moosig, Frank; Schönau, Verena; Franke, Andre; Palm, Øyvind; Molberg, Øyvind; Diamantopoulos, Andreas P; Carette, Simon; Cuthbertson, David; Forbess, Lindsy J; Hoffman, Gary S; Khalidi, Nader A; Koening, Curry L; Langford, Carol A; McAlear, Carol A; Moreland, Larry; Monach, Paul A; Pagnoux, Christian; Seo, Philip; Spiera, Robert; Sreih, Antoine G; Warrington, Kenneth J; Ytterberg, Steven R; Gregersen, Peter K; Pease, Colin T; Gough, Andrew; Green, Michael; Hordon, Lesley; Jarrett, Stephen; Watts, Richard; Levy, Sarah; Patel, Yusuf; Kamath, Sanjeet; Dasgupta, Bhaskar; Worthington, Jane; Koeleman, Bobby P C; de Bakker, Paul I W; Barrett, Jennifer H; Salvarani, Carlo; Merkel, Peter A; González-Gay, Miguel A; Morgan, Ann W; Martín, Javier

    2015-04-02

    We conducted a large-scale genetic analysis of giant cell arteritis (GCA), a polygenic immune-mediated vasculitis. A case-control cohort, comprising 1,651 case subjects with GCA and 15,306 unrelated control subjects from six different countries of European ancestry, was genotyped with the Immunochip array. We also imputed HLA data with a previously validated imputation method to perform a more comprehensive analysis of this genomic region. The strongest association signals were observed in the HLA region, with rs477515 representing the highest peak (p = 4.05 × 10⁻⁴⁰, OR = 1.73). A multivariate model including class II amino acids of HLA-DRβ1 and HLA-DQα1 and one class I amino acid of HLA-B explained most of the HLA association with GCA, consistent with previously reported associations of classical HLA alleles like HLA-DRB1*04. An omnibus test on polymorphic amino acid positions highlighted DRβ1 position 13 (p = 4.08 × 10⁻⁴³) and HLA-DQα1 positions 47 (p = 4.02 × 10⁻⁴⁶), 56, and 76 (both p = 1.84 × 10⁻⁴⁵) as relevant for disease susceptibility. Outside the HLA region, the most significant loci included PTPN22 (rs2476601, p = 1.73 × 10⁻⁶, OR = 1.38), LRRC32 (rs10160518, p = 4.39 × 10⁻⁶, OR = 1.20), and REL (rs115674477, p = 1.10 × 10⁻⁵, OR = 1.63). Our study provides evidence of a strong contribution of HLA class I and II molecules to susceptibility to GCA. In the non-HLA region, we confirmed a key role for the functional PTPN22 rs2476601 variant and proposed other putative risk loci for GCA involved in Th1, Th17, and Treg cell function. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  5. Multi-element neutron activation analysis of biological tissues: contribution to the study of trace element accumulation as a function of aging

    International Nuclear Information System (INIS)

    Gaudry, Andre.

    1975-01-01

    The accumulation of trace elements in various organs as a function of age was studied in rats, in connection with tissue-aging phenomena. Part one reviews the various methods available for developing a programme of simultaneous multi-element analysis in biological matrices. Part two studies the precision and accuracy offered by neutron activation analysis. Special attention is paid to the problem of sample contamination by the silica-glass irradiation supports; the possible causes of this effect are discussed and a procedure limiting its harmful influence is proposed. Part three defines the restrictions introduced by the use of a method to separate the activable matrix. The fourth and last chapter describes the development of a multielement chemical separation system, designed to work semi-automatically for the simultaneous treatment of three samples and a standard in a shielded cell of small dimensions. The principles of a multi-comparator calibration, in which knowledge of certain conventional but imprecise nuclear data is made unnecessary by an experimental expedient, are outlined briefly. Finally, the separation method is tried out on various biological samples, including a reference material (bovine liver SRM 1577, NBS), and some results are given.

  6. Multi-element analysis, bioavailability and fractionation of herbal tea products

    International Nuclear Information System (INIS)

    Szymczycha-Madeja, Anna; Welna, Maja; Zyrnicki, Wieslaw

    2013-01-01

    Herbal teas (Mentha piperitae folium and a mixture of Matricaria chamomilla flos with Lavandula officinalis flos) were compared considering the total contents of micro (Al, Ba, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Sr, Ti, V) and macro (C, H, N, S, Ca, Mg, P) elements, bioavailability and fractionation. Different methods (inductively coupled plasma optical emission spectrometry (ICP OES), Fourier transform infrared spectroscopy (FTIR) and CHNS elemental analysis) were applied. The microwave-assisted digestion procedure was found to be more effective than hot-plate heating for the wet acid digestion of tea. The application of the modified BCR (Community Bureau of Reference) sequential extraction procedure exhibited differences in the concentrations of metals bound to the reducible and oxidizable fractions. The accuracy of the method was verified by analysis of the certified reference material INCT-TL-1 Tea Leaves. The daily intake of all elements from the analyzed herbal tea infusions did not exceed the maximum permissible levels and does not constitute a health risk. (author)

  7. Multi-element analysis, bioavailability and fractionation of herbal tea products

    Energy Technology Data Exchange (ETDEWEB)

    Szymczycha-Madeja, Anna; Welna, Maja; Zyrnicki, Wieslaw, E-mail: anna.szymczycha@pwr.wroc.pl [Wroclaw University of Technology, Chemistry Department, Analytical Chemistry Division, Wroclaw (Poland)

    2013-05-15

    Herbal teas (Mentha piperitae folium and a mixture of Matricaria chamomilla flos with Lavandula officinalis flos) were compared considering the total contents of micro (Al, Ba, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Sr, Ti, V) and macro (C, H, N, S, Ca, Mg, P) elements, bioavailability and fractionation. Different methods (inductively coupled plasma optical emission spectrometry (ICP OES), Fourier transform infrared spectroscopy (FTIR) and CHNS elemental analysis) were applied. The microwave-assisted digestion procedure was found to be more effective than hot-plate heating for the wet acid digestion of tea. The application of the modified BCR (Community Bureau of Reference) sequential extraction procedure exhibited differences in the concentrations of metals bound to the reducible and oxidizable fractions. The accuracy of the method was verified by analysis of the certified reference material INCT-TL-1 Tea Leaves. The daily intake of all elements from the analyzed herbal tea infusions did not exceed the maximum permissible levels and does not constitute a health risk. (author)

  8. Large scale 70mm photography for range resources analysis in the Western United States. [Casa Grande, Arizona, Mercury, Nevada, and Mojave Desert]

    Science.gov (United States)

    Tueller, P. T.

    1977-01-01

    Large scale 70mm aerial photography is a valuable supplementary tool for rangeland studies. A wide assortment of applications was developed, ranging from vegetation mapping to assessing environmental impact on rangelands. Color and color-infrared stereo pairs are useful for effectively sampling sites limited by ground accessibility. They allow an increased sample size at similar or lower cost than ground sampling techniques and provide a permanent record.

  9. Applications of total reflection X-ray fluorescence in multi-element analysis

    International Nuclear Information System (INIS)

    Michaelis, W.; Prange, A.; Knoth, J.

    1985-01-01

    Although Total Reflection X-Ray Fluorescence Analysis (TXRF) became available for practical applications and routine measurements only a few years ago, the number of programmes that make use of this method is increasing rapidly. The scope of work spreads across environmental research and monitoring, mineralogy, mineral exploration, oceanography, biology, medicine and biochemistry. The present paper gives a brief survey of these applications and summarizes some that are typical for quite different matrices. (orig.)

  10. Multielement analysis by neutron activation of tissues from swine administered copper supplemented diets

    International Nuclear Information System (INIS)

    Stroube, W.B. Jr.; Cunningham, W.C.; Tanner, J.T.; Bradley, B.D.; Graber, G.

    1982-01-01

    Instrumental neutron activation analysis was used to determine Co, Cu, Fe, Mg, Mn, Se and Zn in tissues from swine fed copper-supplemented diets. The abundances of the seven elements in the kidney tissues are all within normal ranges. No trends are observed between the groups of animals that received different levels of dietary copper. Dietary copper levels of 70 to 90 ppm increase liver copper abundance in certain animals. (author)

  11. Large-scale proteome analysis of abscisic acid and ABSCISIC ACID INSENSITIVE3-dependent proteins related to desiccation tolerance in Physcomitrella patens

    International Nuclear Information System (INIS)

    Yotsui, Izumi; Serada, Satoshi; Naka, Tetsuji; Saruhashi, Masashi; Taji, Teruaki; Hayashi, Takahisa; Quatrano, Ralph S.; Sakata, Yoichi

    2016-01-01

    ...tolerance might have evolved in ancestral land plants before the separation of bryophytes and vascular plants. - Highlights: • Large-scale proteomics highlighted proteins related to plant desiccation tolerance. • The proteins were regulated by both the phytohormone ABA and ABI3. • The proteins accumulated in desiccation tolerant cells of both Arabidopsis and moss. • Evolutionary origin of regulatory machinery for desiccation tolerance is proposed.

  12. Large-scale proteome analysis of abscisic acid and ABSCISIC ACID INSENSITIVE3-dependent proteins related to desiccation tolerance in Physcomitrella patens

    Energy Technology Data Exchange (ETDEWEB)

    Yotsui, Izumi, E-mail: izumi.yotsui@riken.jp [Department of BioScience, Tokyo University of Agriculture 1-1-1 Sakuragaoka, Setagayaku, Tokyo, 156-8502 (Japan); Serada, Satoshi, E-mail: serada@nibiohn.go.jp [Laboratory of Immune Signal, National Institute of Biomedical Innovation, Health and Nutrition, 7-6-8 Saito-Asagi, Ibaraki, Osaka, 567-0085 (Japan); Naka, Tetsuji, E-mail: tnaka@nibiohn.go.jp [Laboratory of Immune Signal, National Institute of Biomedical Innovation, Health and Nutrition, 7-6-8 Saito-Asagi, Ibaraki, Osaka, 567-0085 (Japan); Saruhashi, Masashi, E-mail: s13db001@mail.saitama-u.ac.jp [Department of BioScience, Tokyo University of Agriculture 1-1-1 Sakuragaoka, Setagayaku, Tokyo, 156-8502 (Japan); Taji, Teruaki, E-mail: t3teruak@nodai.ac.jp [Department of BioScience, Tokyo University of Agriculture 1-1-1 Sakuragaoka, Setagayaku, Tokyo, 156-8502 (Japan); Hayashi, Takahisa, E-mail: t4hayash@nodai.ac.jp [Department of BioScience, Tokyo University of Agriculture 1-1-1 Sakuragaoka, Setagayaku, Tokyo, 156-8502 (Japan); Quatrano, Ralph S., E-mail: rsq@wustl.edu [Department of Biology, Washington University in St. Louis, St. Louis, MO, 63130-4899 (United States); Sakata, Yoichi, E-mail: sakata@nodai.ac.jp [Department of BioScience, Tokyo University of Agriculture 1-1-1 Sakuragaoka, Setagayaku, Tokyo, 156-8502 (Japan)

    2016-03-18

    ...tolerance might have evolved in ancestral land plants before the separation of bryophytes and vascular plants. - Highlights: • Large-scale proteomics highlighted proteins related to plant desiccation tolerance. • The proteins were regulated by both the phytohormone ABA and ABI3. • The proteins accumulated in desiccation tolerant cells of both Arabidopsis and moss. • Evolutionary origin of regulatory machinery for desiccation tolerance is proposed.

  13. Analysis and experimental study on formation conditions of large-scale barrier-free diffuse atmospheric pressure air plasmas in repetitive pulse mode

    Science.gov (United States)

    Li, Lee; Liu, Lun; Liu, Yun-Long; Bin, Yu; Ge, Ya-Feng; Lin, Fo-Chang

    2014-01-01

    Atmospheric air diffuse plasmas have enormous application potential in various fields of science and technology. Without a dielectric barrier, generating large-scale diffuse air plasmas has always been a challenging issue. This paper discusses and analyses the formation mechanism of cold homogeneous plasma. It is proposed that generating stable diffuse atmospheric plasmas in open air must meet three conditions: high transient power with low average power, excitation at low average E-field with locally high E-field regions, and multiple overlapping electron avalanches. Accordingly, an experimental configuration for generating large-scale barrier-free diffuse air plasmas is designed. Based on runaway electron theory, a low-duty-ratio, high-voltage repetitive nanosecond pulse generator is chosen as the discharge excitation source. Using wire electrodes with small radii of curvature, gaps with a highly non-uniform E-field are structured. Experimental results show that volume-scalable, barrier-free, homogeneous, non-thermal air plasmas have been obtained in the gap between the copper-wire electrodes. The area of the cold air plasmas has reached hundreds of square centimeters. The proposed formation conditions for large-scale barrier-free diffuse air plasmas are proved to be reasonable and feasible.

  14. Multi-element analysis of wheat flour and white bread by neutron activation

    International Nuclear Information System (INIS)

    Godinez A, M.A.

    1994-01-01

    Cereals are among the best food sources for humans as well as animals. Although they are mainly an energy food, owing to their starch content, they are also an important source of proteins and amino acids, and they contribute mineral elements to the diet. Even though these elements constitute a very small part of the total diet, they play an important role in many human metabolic processes. A multielement analysis of a foodstuff must be based on a very sensitive analytical technique, such as neutron activation analysis. This nuclear technique allows a qualitative and quantitative analysis of the elements present in a sample, although it does not show the chemical form in which the elements are present. It is based on converting those elements into radioactive ones through exposure to a uniform and constant flux of neutrons, so that their radioactivity can then be determined. The main purpose of the present work was to carry out a multielement analysis of wheat flour and white bread by the neutron activation technique, using the comparator method and first establishing the most appropriate working conditions for irradiation, digestion and measurement of the radioactivity of the sample. In this way, it was found that the wheat flour contains potassium, chlorine, magnesium, sodium, iron, zinc, manganese, rubidium and selenium in concentrations of 2000, 700, 500, 25, 18, 13, 5.5, 0.9 and 0.01-0.3 mg/g, respectively. On the other hand, the white bread was found to contain the same elements as the wheat flour, but in concentrations of 1700, 9000, 400, 7000, 52, 13, 6, 1 and 0.05-0.3 mg/g, respectively. (Author)

  15. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890-foot-long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion, of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both...

  16. Aspects of cleaning environmental materials for multi-element analysis, e.g. plant samples

    International Nuclear Information System (INIS)

    Markert, B.

    1992-01-01

    Cleaning of samples is often the first step in the entire procedure of sample preparation in environmental trace element research. The question must generally be raised whether cleaning is meaningful before chemical investigations of plant material (e.g., for the determination of transfer factors in the soil/plant system) or not (e.g., for food-chain analysis in the plant/animal system). The most varied cleaning procedures for plant samples are currently in use, ranging from dry and wet wiping of the leaf or needle surface up to complete removal of the cuticle with the aid of chloroform. There is at present no standardized cleaning procedure for plant samples, so that it is frequently not possible to compare analytical data from different working groups studying the same plant species. (orig.)

  17. Synthetic multielement standards used for instrumental neutron activation analysis as rock imitations

    International Nuclear Information System (INIS)

    Leypunskaya, D.I.; Drynkin, V.I.; Belenky, B.V.; Kolomijtsev, M.A.; Dundera, V.Yu.; Pachulia, N.V.

    1975-01-01

    Complex (multielement) standards representing the microelement composition of standard rocks such as trap ST-1 (USSR), gabbrodiorite SGD-1 (USSR), albitized granite SG-1 (USSR), basalt BCR-1 (USA) and granodiorite GSP-1 (USA) have been synthesized. It has been shown that the concentration of each microelement in the synthetic standards can be specified with high precision. A comparative investigation of the synthetic imitations and the above natural standard rocks has been carried out. It has been found that the results of instrumental neutron activation analysis using the synthetic standards are as good as those obtained when natural standard rocks are used. The results also substantiate the versatility of the preparation method, i.e., the possibility of using it to prepare synthetic standards representing the microelement composition of any natural rocks with various compositions and concentrations of microelements. (T.G.)

  18. A computer program to evaluate the experimental data in instrumental multielement neutron activation analysis

    International Nuclear Information System (INIS)

    Greim, L.; Motamedi, K.; Niedergesaess, R.

    1976-01-01

    A computer code for evaluating experimental neutron activation analysis (NAA) data to determine atomic abundances is described. The experimental data are, besides a sample designation, the sample weight, the irradiation parameters and a Ge(Li) pulse-height spectrum from the activity measurement. The organisation of the necessary nuclear data, covering all modes of activation in reactor irradiations, is given. Furthermore, the automatic evaluation of spectra, the assignment of the resulting peaks to nuclides and the calculation of atomic abundances are described. The complete evaluation of a spectrum with many lines, e.g. 100 lines from 20 nuclides, takes less than 1 minute of machine time on the TR 440 computer. (orig.)
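
    To make the peak-assignment step concrete, here is a minimal sketch that matches fitted peak energies against a small gamma-line library within an energy tolerance. The gamma energies are standard literature values, but the code is only an illustration of the idea, not the TR 440 programme itself.

```python
# Gamma-line energies in keV for a few activation products (literature values).
GAMMA_LINES_KEV = {
    "Na-24": [1368.6, 2754.0],
    "K-42":  [1524.6],
    "Mn-56": [846.8, 1810.7],
}

def assign_peaks(peak_energies_kev, tol_kev=2.0):
    """Map each fitted peak energy to the nuclides with a line within tol."""
    return {
        e: [nuc for nuc, lines in GAMMA_LINES_KEV.items()
            if any(abs(e - line) <= tol_kev for line in lines)]
        for e in peak_energies_kev
    }

print(assign_peaks([846.5, 1368.9, 1525.1]))
# {846.5: ['Mn-56'], 1368.9: ['Na-24'], 1525.1: ['K-42']}
```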

  19. Tracking transformation processes of organic micropollutants in aquatic environments using multi-element isotope fractionation analysis

    International Nuclear Information System (INIS)

    Hofstetter, Thomas B.; Bolotin, Jakov; Skarpeli-Liati, Marita; Wijker, Reto; Kurt, Zohre; Nishino, Shirley F.; Spain, Jim C.

    2011-01-01

    The quantitative description of enzymatic or abiotic transformations of man-made organic micropollutants in rivers, lakes, and groundwaters is one of the major challenges associated with the risk assessment of water resource contamination. Compound-specific isotope analysis enables one to identify (bio)degradation pathways based on changes in the contaminants' stable isotope ratios even if multiple reactive and non-reactive processes cause concentrations to decrease. Here, we investigated how the magnitude and variability of isotope fractionation in some priority pollutants is determined by the kinetics and mechanisms of important enzymatic and abiotic redox reactions. For nitroaromatic compounds and substituted anilines, we illustrate that competing transformation pathways can be assessed via trends of N and C isotope signatures.
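
    The quantitative backbone of such compound-specific analyses is commonly the Rayleigh model, shown here in one standard form as a general illustration (the authors' exact evaluation may differ):

```latex
\ln\!\left(\frac{R_t}{R_0}\right) = \frac{\varepsilon}{1000}\,\ln\!\left(\frac{C_t}{C_0}\right)
```

    Here R_t/R_0 is the isotope ratio relative to its initial value, C_t/C_0 the remaining fraction of the compound, and ε the enrichment factor (in per mil) characteristic of the transformation pathway; concentration decreases caused by non-reactive processes such as dilution or sorption leave the isotope ratio essentially unchanged, which is what makes the approach diagnostic.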

  20. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World-famous architects such as Herzog & de Meuron, Zaha Hadid, Richard Meier and David Chipperfield challenge the exposure of concrete in their architecture. At home, this trend can be seen in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and in the black, curved, smooth concrete surfaces of Zaha Hadid's Ordrupgård. Furthermore, one can point to initiatives such as "Synlig beton" (visible concrete), presented on the website www.synligbeton.dk, and Spæncom's aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project "Lasting large scale glazed concrete formwork," which I am working on at DTU's Department of Architectural Engineering, will be able to complement these. It is a project where I...

  1. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors to be used by hundreds to thousands of independent users, expanding the reach in both dimensions by an order of magnitude over the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist that can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  2. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid-flow instabilities which lead to the formation of large eddies is also key to understanding the behavior of large-scale fires. Here, modeling tools can be effectively exploited to investigate the fluid-flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  3. Multielement analysis of Zanthoxylum bungeanum Maxim. essential oil using ICP-MS/MS.

    Science.gov (United States)

    Fu, Liang; Xie, Hualin; Shi, Shuyun

    2018-04-12

    The concentrations of trace elements (Cr, Ni, As, Cd, Hg, and Pb) in Zanthoxylum bungeanum Maxim. essential oil (ZBMEO) were determined by inductively coupled plasma tandem mass spectrometry. The ZBMEO sample was analyzed directly after simple dilution with n-hexane. Because n-hexane has a relatively high vapor pressure and thus places a high load on the plasma, we used a narrow-injector torch and optimized the plasma radio-frequency power and carrier-gas flow to ensure stable operation of the plasma. An optional gas flow of 20% O₂ in Ar was added to the carrier gas to prevent the incomplete combustion of highly concentrated organic carbon in the plasma and the deposition of carbon on the sampling and skimmer cone orifices. In tandem mass spectrometry mode, O₂ was added to the collision/reaction cell to eliminate interferences. The limits of detection for Cr, Ni, As, Cd, Hg, and Pb were 2.26, 1.64, 2.02, 1.35, 1.76, and 0.97 ng L⁻¹, respectively. After analyzing 23 ZBMEO samples from different regions of China, we found that the average concentration ranges of trace elements in the 23 ZBMEO samples were 0.72-6.02 ng g⁻¹, 0.09-2.87 ng g⁻¹, 0.21-5.84 ng g⁻¹, 0.16-2.15 ng g⁻¹, 0.13-0.92 ng g⁻¹, and 0.17-0.73 ng g⁻¹ for Cr, Ni, As, Cd, Hg, and Pb, respectively. The trace element contents in ZBMEO differed significantly when different extraction technologies were used. The study revealed that the contents of the toxic elements As, Cd, Hg, and Pb were extremely low, and hence they are unlikely to pose a health risk following ZBMEO ingestion. Graphical abstract: The working mechanism of sample analysis by ICP-MS/MS.
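
    The abstract reports limits of detection without stating the formula; a common convention, assumed here rather than taken from the paper, is three times the standard deviation of repeated blank measurements divided by the calibration slope:

```python
import numpy as np

def lod(blank_signals, slope):
    """3-sigma limit of detection, in the concentration units of the slope.

    Assumes the conventional definition LOD = 3 * sd(blank) / slope; the
    paper does not state its exact procedure, so this is an illustration.
    """
    return 3.0 * np.std(blank_signals, ddof=1) / slope

blanks = [120.0, 118.5, 121.2, 119.8, 120.6]   # counts/s, invented numbers
slope = 1.5                                     # counts/s per ng/L, invented
print(f"LOD = {lod(blanks, slope):.2f} ng/L")   # ~2.0 ng/L with these numbers
```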

  4. A map of the cosmic microwave background radiation from the Wilkinson Microwave Anisotropy Probe (WMAP), showing the large-scale fluctuations (the quadrupole and octopole) isolated by an analysis done partly by theorists at CERN.

    CERN Multimedia

    2004-01-01

    A recent analysis, in part by theorists working at CERN, suggests a new view of the cosmic microwave background radiation. It seems the solar system, rather than the universe, causes the radiation's large-scale fluctuations, similar to the bass in a song.

  5. Wear And Tear Determination By Trace Multi-Element Analysis Of An Unused And Used Lubricant Oil Using Instrumental Neutron Activation Analysis

    International Nuclear Information System (INIS)

    Adeyemo, D.J.

    2004-01-01

    The instrumental neutron activation analysis technique, using the Imperial College Reactor Centre's CONSORT Mk II nuclear reactor, was utilized for the determination of As, Ba, Ca, Cl, Co, Cr, Cu, Fe, K, Mn, Mo, Na, Ni, Rb, Sb, Se, Ti, V, and Zn in an imported car lubricant oil before and after use. The wear of the oil-lubricated parts of the car engine was monitored by establishing a correlation between the results obtained in the analysis. The results obtained from the analysis of the unused and then used samples from a low-performing, oil-leaking four-stroke car engine showed an increase in all the elements determined except Se and Rb. The precision of the multi-element analysis is better than 12% for most of the elements. The accuracy of the measurements is also validated by the results obtained from the analysis of the NBS SRM 1635 (sub-bituminous) coal standard for the elements. The results obtained indicate that analysis of unused and used lubricant oil samples can aid in locating defects in engine parts and hence facilitate maintenance procedures.

  6. A computer programme for use in the development of multi-element x-ray-fluorescence methods of analysis

    International Nuclear Information System (INIS)

    Wall, G.J.

    1985-01-01

    A computer programme (written in BASIC) is described for the evaluation of spectral-line intensities in X-ray-fluorescence spectrometry. The programme is designed to assist the analyst while he is developing new analytical methods, because it facilitates the selection of the following evaluation parameters: calculation models, spectral-line correction factors, calibration curves, calibration ranges, and point deletions. In addition, the programme enables the analyst to undertake routine calculations of data from multi-element analyses in which variable data-reduction parameters are used for each element

  7. Non-stationary analysis of the frequency and intensity of heavy precipitation over Canada and their relations to large-scale climate patterns

    Science.gov (United States)

    Tan, Xuezhi; Gan, Thian Yew

    2017-05-01

    In recent years, because the frequency and severity of floods have increased across Canada, it is important to understand the characteristics of Canadian heavy precipitation. Long-term precipitation data from 463 gauging stations across Canada were analyzed using non-stationary generalized extreme value (GEV), Poisson, and generalized Pareto (GP) distributions. Time-varying covariates that represent large-scale climate patterns such as the El Niño Southern Oscillation (ENSO), North Atlantic Oscillation (NAO), Pacific Decadal Oscillation (PDO) and North Pacific Oscillation (NP) were incorporated into the parameters of the GEV, Poisson and GP distributions. Results show that GEV distributions tend to under-estimate annual maximum daily precipitation (AMP) in the western and eastern coastal regions of Canada, compared to GP distributions. Poisson regressions show that temporal clusters of heavy precipitation events in Canada are related to large-scale climate patterns. By modeling AMP time series with non-stationary GEV and heavy precipitation with non-stationary GP distributions, it is evident that AMP and heavy precipitation in Canada show strong non-stationarities (abrupt and slowly varying changes), likely because of the influence of large-scale climate patterns. AMP in southwestern coastal regions, the southern Canadian Prairies and the Great Lakes tends to be higher in El Niño than in La Niña years, while AMP in other regions of Canada tends to be lower in El Niño than in La Niña years. The influence of ENSO on heavy precipitation was spatially consistent but stronger than on AMP. The effect of PDO, NAO and NP on extreme precipitation is also statistically significant at some stations across Canada.
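
    A minimal sketch of such a non-stationary fit, with the GEV location parameter depending linearly on a climate-index covariate; the data, covariate, starting values, and parameterization are illustrative, not the study's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def neg_log_lik(theta, amp, covariate):
    """Negative log-likelihood of a GEV whose location varies with a covariate."""
    mu0, mu1, log_sigma, xi = theta
    mu = mu0 + mu1 * covariate        # non-stationary location parameter
    sigma = np.exp(log_sigma)         # keep the scale positive
    # scipy's shape c equals -xi in the usual climatological convention
    return -genextreme.logpdf(amp, c=-xi, loc=mu, scale=sigma).sum()

rng = np.random.default_rng(0)
enso = rng.normal(size=60)                                  # toy climate index
amp = genextreme.rvs(c=-0.1, loc=30 + 3 * enso, scale=8, random_state=1)

fit = minimize(neg_log_lik, x0=[30.0, 0.0, 2.0, 0.1],
               args=(amp, enso), method="Nelder-Mead")
print(fit.x)  # estimated [mu0, mu1, log_sigma, xi]; mu1 near 3 is expected
```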

  8. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
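
    The central object of the review is the bias expansion itself; schematically, to second order it reads (the full treatment adds higher-derivative and higher-order terms, as the text describes):

```latex
\delta_g(\mathbf{x},\tau) = b_1\,\delta(\mathbf{x},\tau)
  + \tfrac{1}{2} b_2\,\delta^2(\mathbf{x},\tau)
  + b_{K^2}\,\big(K_{ij}K^{ij}\big)(\mathbf{x},\tau)
  + \dots + \epsilon(\mathbf{x},\tau)
```

    with δ_g the galaxy overdensity, δ the matter overdensity, K_ij the tidal field, the b_n the bias parameters that absorb the details of galaxy formation, and ε a stochastic contribution.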

  9. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as following. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  10. How many hours do you usually work? An analysis of the working hours questions in 26 large-scale surveys in six countries and the European Union

    OpenAIRE

    Tijdens, K.; Dragstra, A.

    2007-01-01

    This article reviews how working hours are asked for in 26 large-scale surveys in six countries plus the European Union. Four dimensions of working time were investigated, notably number of working hours, timing of work, predictability and control over hours, and commuting time. Although almost all questionnaires ask for hours worked, the terminology varies greatly. In only half of the cases a reference period is taken into account and in half the reasons for working more/less in the survey w...

  11. How many hours do you usually work? An analysis of the working hours questions in 26 large-scale surveys in 6 countries and the European Union.

    OpenAIRE

    Dragstra, A.; Tijdens, K.

    2004-01-01

    This paper reviews how working hours are asked in 26 large-scale surveys in 6 countries plus the European Union. Four dimensions of working time were investigated, notably number of working hours, timing of work, predictability and control over hours, and commuting time. Although almost all questionnaires ask for hours worked, the terminology varies largely. In only half of the cases a reference period is taken into account and in half the reasons for working more/less in the survey week than...

  12. Concepts for Large Scale Hydrogen Production

    OpenAIRE

    Jakobsen, Daniel; Åtland, Vegar

    2016-01-01

    The objective of this thesis is to perform a techno-economic analysis of large-scale, carbon-lean hydrogen production in Norway, in order to evaluate various production methods and estimate a break-even price level. Norway possesses vast energy resources, and the export of oil and gas is vital to the country's economy. The results of this thesis indicate that hydrogen represents a viable, carbon-lean opportunity to utilize these resources, which can prove key in the future of Norwegian energy e...

  13. Convective and large-scale mass flux profiles over tropical oceans determined from synergistic analysis of a suite of satellite observations

    Science.gov (United States)

    Masunaga, Hirohiko; Luo, Zhengzhao Johnny

    2016-07-01

    A new, satellite-based methodology is developed to evaluate convective mass flux and large-scale total mass flux. To derive the convective mass flux, candidate profiles of in-cloud vertical velocity are first constructed with a simple plume model under the constraint of ambient sounding and then narrowed down to the solution that matches satellite-derived cloud top buoyancy. Meanwhile, the large-scale total mass flux is provided separately from satellite soundings by a method developed previously. All satellite snapshots are sorted into a composite time series that delineates the evolution of a vigorous and organized convective system. Principal findings are the following. First, convective mass flux is modulated primarily by convective cloud cover, with the intensity of individual convection being less variable over time. Second, convective mass flux dominates the total mass flux only during the early hours of the convective evolution; as convective system matures, a residual mass flux builds up in the mass flux balance that is reminiscent of stratiform dynamics. The method developed in this study is expected to be of unique utility for future observational diagnosis of tropical convective dynamics and for evaluation of global climate model cumulus parameterizations in a global sense.
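
    A toy sketch of the kind of entraining-plume update used to construct candidate in-cloud vertical-velocity profiles, integrating dw²/dz = 2aB − 2εw² upward from cloud base; the coefficients, buoyancy profile, and mass-flux factors are all illustrative assumptions, not the paper's values:

```python
import numpy as np

def plume_w_profile(z, buoyancy, entrainment=2e-4, w0=0.5):
    """Integrate dw^2/dz = 2*a*B - 2*eps*w^2 upward from cloud base."""
    a = 1.0 / 6.0                # virtual-mass coefficient (assumed value)
    w2 = np.empty_like(z)
    w2[0] = w0 ** 2
    for k in range(1, len(z)):
        dz = z[k] - z[k - 1]
        dw2 = (2 * a * buoyancy[k] - 2 * entrainment * w2[k - 1]) * dz
        w2[k] = max(w2[k - 1] + dw2, 0.0)   # clip at zero (plume stops rising)
    return np.sqrt(w2)

z = np.linspace(1000.0, 12000.0, 50)            # height above ground [m]
B = 0.01 * np.sin(np.pi * (z - 1000) / 11000)   # toy buoyancy profile [m s^-2]
w = plume_w_profile(z, B)

# Convective mass flux per unit area: Mc = rho * sigma * w
rho, sigma = 0.8, 0.05   # air density [kg m^-3], convective area fraction
print((rho * sigma * w).max())
```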

  14. Element availability of bivalve with symbiotic zooxanthellae in coral sea area as studied by multielement profiling analysis

    Science.gov (United States)

    Itoh, A.; Kabe, N.

    2008-12-01

    In coral seas, a characteristic ecosystem is formed by many kinds of marine animals and plants, although the seawater is nutrient-poor. This may be explained by the fact that bioessential chemical species are taken up and used efficiently by lower animals and plants in coral sea areas. The symbiotic relationships often found among different animals and plants in this area are considered to be one such process. However, the specific bioavailability of elements to marine animals and plants in coral reef areas has not been studied from the viewpoint of trace and ultratrace elements. The present authors have found that a bivalve with symbiotic zooxanthellae (Tridacna crocea) living on coral reefs has relatively high bioaccumulation factors for many bioessential elements compared with other kinds of bivalves, although it lives in a nutrient-poor sea area. The present study therefore focused on Tridacna crocea as a representative symbiotic animal. First, multielement determination of major-to-ultratrace elements (about 20 elements) in each organ of Tridacna crocea with symbiotic zooxanthellae was carried out by ICP-AES, ICP-MS, and CHN coder. Second, the specific bioavailability of trace and ultratrace elements in Tridacna crocea was discussed on the basis of multielement data for seawater, seaweeds, and other bivalves in the coral sea area.

  15. Analysis of the Economic Impact of Large-Scale Deployment of Biomass Resources for Energy and Materials in the Netherlands. Macro-economics biobased synthesis report

    International Nuclear Information System (INIS)

    Hoefnagels, R.; Dornburg, V.; Faaij, A.; Banse, M.

    2009-03-01

    The Bio-based Raw Materials Platform (PGG), part of the Energy Transition in The Netherlands, commissioned the Agricultural Economics Research Institute (LEI) and the Copernicus Institute of Utrecht University to conduct research on the macro-economic impact of large-scale deployment of biomass for energy and materials in the Netherlands. Two model approaches were applied based on a consistent set of scenario assumptions: a bottom-up study including techno-economic projections of fossil and bio-based conversion technologies and a top-down study including macro-economic modelling of (global) trade of biomass and fossil resources. The results of the top-down and bottom-up modelling work are reported separately. The results of the synthesis of the modelling work are presented in this report.

  16. Analysis of the Economic Impact of Large-Scale Deployment of Biomass Resources for Energy and Materials in the Netherlands. Appendix 2. Macro-economic Scenarios

    International Nuclear Information System (INIS)

    Banse, M.

    2009-03-01

    The Bio-based Raw Materials Platform (known as PGG), which is part of the Energy Transition programme in the Netherlands, commissioned the Agricultural Economics Research Institute (LEI) and the Copernicus Institute of Utrecht University to study the macro-economic impact of large-scale deployment of biomass for energy and materials in the Netherlands. Two model approaches were applied based on a consistent set of scenario assumptions: a bottom-up study including techno-economic projections of fossil and bio-based conversion technologies and a top-down study including macro-economic modelling of (global) trade of biomass and fossil resources. The results of the top-down study (part 2), including macro-economic modelling of (global) trade of biomass and fossil resources, are presented in this report.

  17. Diversity analysis in Cannabis sativa based on large-scale development of expressed sequence tag-derived simple sequence repeat markers.

    Science.gov (United States)

    Gao, Chunsheng; Xin, Pengfei; Cheng, Chaohua; Tang, Qing; Chen, Ping; Wang, Changbiao; Zang, Gonggu; Zhao, Lining

    2014-01-01

    Cannabis sativa L. is an important economic plant for the production of food, fiber, oils, and intoxicants. However, lack of sufficient simple sequence repeat (SSR) markers has limited the development of cannabis genetic research. Here, large-scale development of expressed sequence tag simple sequence repeat (EST-SSR) markers was performed to obtain more informative genetic markers, and to assess genetic diversity in cannabis (Cannabis sativa L.). Based on the cannabis transcriptome, 4,577 SSRs were identified from 3,624 ESTs. From there, a total of 3,442 complementary primer pairs were designed as SSR markers. Among these markers, trinucleotide repeat motifs (50.99%) were the most abundant, followed by hexanucleotide (25.13%), dinucleotide (16.34%), tetranucleotide (3.8%), and pentanucleotide (3.74%) repeat motifs, respectively. The AAG/CTT trinucleotide repeat (17.96%) was the most abundant motif detected in the SSRs. One hundred and seventeen EST-SSR markers were randomly selected to evaluate primer quality in 24 cannabis varieties. Among these 117 markers, 108 (92.31%) were successfully amplified and 87 (74.36%) were polymorphic. Forty-five polymorphic primer pairs were selected to evaluate genetic diversity and relatedness among the 115 cannabis genotypes. The results showed that 115 varieties could be divided into 4 groups primarily based on geography: Northern China, Europe, Central China, and Southern China. Moreover, the coefficient of similarity between cannabis from Northern China and the European group was higher than that between Northern China and the other two groups, owing to a similar climate. This study outlines the first large-scale development of SSR markers for cannabis. These data may serve as a foundation for the development of genetic linkage, quantitative trait loci mapping, and marker-assisted breeding of cannabis.
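    A minimal sketch of the motif-share tally behind the reported repeat-type percentages. The input motif list is a made-up stand-in for real EST-SSR search output (e.g., from a MISA-style tool).

```python
from collections import Counter

# Classify identified SSR motifs by repeat-unit length and tally their shares.
# The motif list here is a hypothetical stand-in for real pipeline output.
ssr_motifs = ["AAG", "CTT", "AT", "AAGCTT", "ATGC", "AATGC", "AAG", "GA"]

unit_names = {2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}
counts = Counter(unit_names[len(m)] for m in ssr_motifs)
total = sum(counts.values())
for unit, n in counts.most_common():
    print(f"{unit}nucleotide: {n} ({100 * n / total:.2f}%)")
```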

  18. Web-based NGS data analysis using miRMaster: a large-scale meta-analysis of human miRNAs.

    Science.gov (United States)

    Fehlmann, Tobias; Backes, Christina; Kahraman, Mustafa; Haas, Jan; Ludwig, Nicole; Posch, Andreas E; Würstle, Maximilian L; Hübenthal, Matthias; Franke, Andre; Meder, Benjamin; Meese, Eckart; Keller, Andreas

    2017-09-06

    The analysis of small RNA NGS data together with the discovery of new small RNAs is among the foremost challenges in life science. For the analysis of raw high-throughput sequencing data we implemented the fast, accurate and comprehensive web-based tool miRMaster. Our toolbox provides a wide range of modules for quantification of miRNAs and other non-coding RNAs, discovering new miRNAs, isomiRs, mutations, exogenous RNAs and motifs. Use-cases comprising hundreds of samples are processed in less than 5 h with an accuracy of 99.4%. An integrative analysis of small RNAs from 1836 data sets (20 billion reads) indicated that context-specific miRNAs (e.g. miRNAs present only in one or few different tissues / cell types) still remain to be discovered, while broadly expressed miRNAs appear to be largely known. In total, our analysis of known and novel miRNAs indicated nearly 22 000 candidate precursors with one or two mature forms. Based on these, we designed a custom microarray comprising 11 872 potential mature miRNAs to assess the quality of our prediction. MiRMaster is a convenient tool for the comprehensive and fast analysis of miRNA NGS data. In addition, our predicted miRNA candidates, provided as a custom array, will allow researchers to perform in-depth validation of the candidates that interest them. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
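    A deliberately crude sketch of read-based miRNA quantification, the kind of step a pipeline like this performs at scale. Real tools handle alignment, mismatches, and isomiRs far more carefully; the sequences and reads below are examples only.

```python
from collections import Counter

# Toy miRNA quantification by exact prefix matching of reads against example
# mature sequences (DNA alphabet). Real pipelines use proper alignment and
# isomiR/mismatch handling; this only illustrates the counting idea.
mature = {
    "miR-1-3p":  "TGGAATGTAAAGAAGTATGTAT",
    "miR-21-5p": "TAGCTTATCAGACTGATGTTGA",
}
reads = ["TGGAATGTAAAGAAGTATGTAT", "TAGCTTATCAGACTGATGTTGA",
         "TAGCTTATCAGACTGATGTTGA", "ACGTACGTACGTACGTACGT"]

counts = Counter()
for r in reads:
    for name, seq in mature.items():
        if r.startswith(seq[:18]):   # crude 18-nt prefix match
            counts[name] += 1
print(counts)
```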

  19. Multi-element Analysis of variable sample matrices using collision/reaction cell inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Zahran, N.F.; Helal, A.I.; Amr, M.A.; Amr, M.A.; Al-saad, K.A.

    2008-01-01

    An ICP-MS with an octopole reaction/collision cell is used for the multielement determination of trace elements in water, plant, and soil samples. The use of a reaction or collision gas reduces serious spectral interferences from matrix elements such as ArCl or ArNa. The background equivalent concentration (BEC) is reduced by one order of magnitude at a helium flow rate of 1 mL/min. Certified reference materials, namely NIST Water 1643d, Tomato Leaves 1573a, and Montana Soil 2711, are used. The trace elements Mn, Fe, Co, Ni, Cu, Zn, As, Mo, Cd and Pb are determined in the different matrices with an accuracy better than 8% relative to the certified values.
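    A minimal sketch of how a background equivalent concentration might be computed from raw count rates. All count rates and concentrations below are invented for illustration; they are chosen only to reproduce an order-of-magnitude BEC reduction like the one reported.

```python
# Background equivalent concentration (BEC): the analyte concentration whose
# signal would equal the background. Numbers are illustrative, not the paper's.

def bec(background_cps, analyte_cps, analyte_conc_ug_l):
    sensitivity = (analyte_cps - background_cps) / analyte_conc_ug_l  # cps per ug/L
    return background_cps / sensitivity

no_gas = bec(background_cps=5000, analyte_cps=105_000, analyte_conc_ug_l=1.0)
with_he = bec(background_cps=450, analyte_cps=100_450, analyte_conc_ug_l=1.0)
print(f"BEC without gas: {no_gas:.4f} ug/L")
print(f"BEC with He:     {with_he:.4f} ug/L  (~one order of magnitude lower)")
```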

  20. Multielement analysis of environmental samples by total-reflection X-ray fluorescence spectrometry, neutron activation analysis and inductively coupled plasma optical emission spectroscopy

    International Nuclear Information System (INIS)

    Michaelis, W.

    1986-01-01

    In environmental research and protection, trace elements have to be determined over a wide range of atomic number, down to very low concentrations, and in quite different matrices. This challenge requires the availability of complementary analytical methods characterized by a high detection power and few sources of systematic errors. Besides, the capacity for multielement detection is often desired, since it facilitates the tackling of many problems in which numerous trace elements are of direct concern. Total-reflection X-ray fluorescence, neutron activation analysis and inductively coupled plasma optical emission spectroscopy in principle fulfill these requirements quite well. However, each method has its domain, and the application to certain sample species may be less promising. Under this aspect, the paper summarizes some recent developments and investigations, including intercomparisons as far as possible. Various matrices are considered: rainwater and airborne particulates, soil samples, river sediments and suspended particulate matter, river water filtrates, ocean water, and organic matrices. Capabilities and limitations are discussed. Sample preparation techniques are described if they are new or essential for achieving the results given. (orig.) [de

  1. Analysis of the Economic Impact of Large-Scale Deployment of Biomass Resources for Energy and Materials in the Netherlands. Appendix 1. Bottom-up Scenarios

    International Nuclear Information System (INIS)

    Hoefnagels, R.; Dornburg, V.; Faaij, A.; Banse, M.

    2009-03-01

    The Bio-based Raw Materials Platform (PGG), part of the Energy Transition in The Netherlands, commissioned the Agricultural Economics Research Institute (LEI) and the Copernicus Institute of Utrecht University to conduct research on the macro-economic impact of large-scale deployment of biomass for energy and materials in the Netherlands. Two model approaches were applied based on a consistent set of scenario assumptions: a bottom-up study including techno-economic projections of fossil and bio-based conversion technologies and a top-down study including macro-economic modelling of (global) trade of biomass and fossil resources. The results of the top-down and bottom-up modelling work are reported separately. The results of the synthesis of the modelling work are presented in the main report. This report (part 1) presents scenarios for future biomass use for energy and materials, and analyses the consequences for energy supply, chemical production, costs and greenhouse gas (GHG) emissions with a bottom-up approach. The bottom-up projections, as presented in this report, form the basis for modelling work using the top-down macro-economic model (LEITAP) to assess the economic impact of substituting fossil-based energy carriers with biomass in the Netherlands. The results of the macro-economic modelling work, and the linkage between the results of the bottom-up and top-down work, will be presented in the top-down economic part and synthesis report of this study.

  2. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  3. An analysis of the energy efficiency of winter rapeseed biomass under different farming technologies. A case study of a large-scale farm in Poland

    International Nuclear Information System (INIS)

    Budzyński, Wojciech Stefan; Jankowski, Krzysztof Józef; Jarocki, Marcin

    2015-01-01

    The article presents the results of a three-year study investigating the impact of production technology on the energy efficiency of winter rapeseed produced in large-scale farms. Rapeseed biomass produced in a high-input system was characterized by the highest energy demand (30.00 GJ ha⁻¹). The energy demand associated with medium-input and low-input systems was 20% and 34% lower, respectively. The highest energy value of oil, oil cake and straw was noted in winter rapeseed produced in the high-input system. In the total energy output (268.5 GJ ha⁻¹), approximately 17% of energy was accumulated in oil, 20% in oil cake, and 63% in straw. In lower input systems, the energy output of oil decreased by 13–23%, the energy output of oil cake – by 6–16%, and the energy output of straw – by 29–37% without visible changes in the structure of energy accumulated in different components of rapeseed biomass. The highest energy gain was observed in the high-input system. The low-input system was characterized by the highest energy efficiency ratio, at 4.22 for seeds and 9.43 for seeds and straw. The increase in production intensity reduced the energy efficiency of rapeseed biomass production by 8–18% (seeds) and 5–9% (seeds and straw). - Highlights: • Energy inputs in the high-input production system reached 30 GJ ha⁻¹. • Energy inputs in the medium- and low-input systems were reduced by 20% and 34%. • Energy gain in the high-input system was 15% and 42% higher than in other systems. • Energy ratio in the high-input system was 5–18% lower than in the low-input system.
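    The energy efficiency ratio here is simply energy output divided by energy input, and the abstract's own figures let the high-input ratio be recomputed as a check. The low-input output is not quoted, so only the high-input case is worked below.

```python
# Energy-balance arithmetic using only figures quoted in the abstract
# (GJ per hectare); the low-input output is not quoted, so only the
# high-input ratio can be recomputed here.
inputs = {"high": 30.00, "medium": 30.00 * 0.80, "low": 30.00 * 0.66}
output_high = 268.5  # total output (oil + oil cake + straw), high-input system

gain = output_high - inputs["high"]    # energy gain [GJ/ha]
ratio = output_high / inputs["high"]   # energy efficiency ratio
print(f"high-input: gain = {gain:.1f} GJ/ha, ratio = {ratio:.2f}")
# -> ratio ~= 8.95, consistent with the abstract's statement that the
#    high-input ratio is 5-18% below the low-input value of 9.43.
```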

  4. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces in a large-scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)
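    A sketch of the reported correlation-plus-cluster workflow on random stand-in data. The element list follows the abstract; the concentrations, distance metric, and cluster count are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Element-element correlations across enamel samples, then cluster analysis.
# The concentration matrix is a random stand-in for the 180 measured teeth.
rng = np.random.default_rng(0)
elements = ["O", "F", "Na", "P", "Ca", "Mn", "Fe", "Cu", "Zn", "Pb", "Sr"]
conc = rng.lognormal(sigma=0.5, size=(180, len(elements)))

corr = np.corrcoef(conc, rowvar=False)             # 11 x 11 correlation matrix
dist = squareform(1 - np.abs(corr), checks=False)  # condensed distance: 1 - |r|
groups = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print(dict(zip(elements, groups)))
```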

  5. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  6. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  7. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...
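    The abstract describes decomposing a centralized large-scale balancing problem into unit-level subproblems coordinated by an aggregator. The toy dual-decomposition sketch below illustrates that general idea only; the quadratic unit costs, the price update, and all numbers are invented for illustration and are not the paper's model.

```python
import numpy as np

# Toy dual decomposition: an aggregator adjusts a "price" (dual variable) so
# that flexible units' local responses sum to a target power balance.
# Quadratic local costs c_i * p_i^2 are stand-ins, not the paper's unit models.
rng = np.random.default_rng(1)
c = rng.uniform(0.5, 2.0, size=50)   # local cost curvature, one per unit
target = 100.0                       # required total power
lam, step = 0.0, 0.02                # price and gradient step size

for _ in range(500):
    p = lam / (2 * c)                # each unit solves: min c_i p_i^2 - lam p_i
    lam += step * (target - p.sum()) # aggregator raises the price if short

print(f"total = {p.sum():.2f} (target {target}), price = {lam:.3f}")
```

    Each unit's subproblem is solved locally in closed form here; in the paper's setting the local solves would be the units' own optimizations, and only the price and aggregate response are exchanged.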

  8. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
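    The abstract does not spell out the hierarchical model, so the sketch below uses Ledoit-Wolf shrinkage instead, a different but related remedy for the overfitting of empirical covariance estimates in the p >> n regime typical of OMICS data, purely to illustrate the problem setting.

```python
import numpy as np
from sklearn.covariance import LedoitWolf, empirical_covariance

# p >> n regime: 500 variables but only 40 samples, so the empirical
# covariance is rank-deficient and high-variance. Data are synthetic.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 500))

emp = empirical_covariance(X)
lw = LedoitWolf().fit(X)
print("empirical covariance rank:", np.linalg.matrix_rank(emp))  # at most 39
print("shrinkage intensity:", round(lw.shrinkage_, 3))  # weight toward target
```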

  9. Youth Mental Health Services Utilization Rates After a Large-Scale Social Media Campaign: Population-Based Interrupted Time-Series Analysis.

    Science.gov (United States)

    Booth, Richard G; Allen, Britney N; Bray Jenkyn, Krista M; Li, Lihua; Shariff, Salimah Z

    2018-04-06

    Despite the uptake of mass media campaigns, their overall impact remains unclear. Since 2011, a Canadian telecommunications company has operated an annual, large-scale mental health advocacy campaign (Bell Let's Talk) focused on mental health awareness and stigma reduction. In February 2012, the campaign began to explicitly leverage the social media platform Twitter and incentivized participation from the public by promising donations of Can $0.05 for each interaction with a campaign-specific username (@Bell_LetsTalk). The intent of the study was to examine the impact of this 2012 campaign on youth outpatient mental health services in the province of Ontario, Canada. Monthly outpatient mental health visits (primary health care and psychiatric services) were obtained for Ontario youth aged 10 to 24 years (approximately 5.66 million visits) from January 1, 2006 to December 31, 2015. Interrupted time-series autoregressive integrated moving average (ARIMA) modeling was implemented to evaluate the impact of the campaign on rates of monthly outpatient mental health visits. A lagged intervention date of April 1, 2012 was selected to account for the delay required for a patient to schedule and attend a mental health-related physician visit. The inclusion of Twitter into the 2012 Bell Let's Talk campaign was temporally associated with an increase in outpatient mental health utilization for both males and females. Within primary health care environments, female adolescents aged 10 to 17 years experienced a monthly increase in the mental health visit rate from 10.2/1000 in April 2006 to 14.1/1000 in April 2015 (slope change of 0.094 following the campaign); further post-campaign increases were observed (slope change of 0.005, P=.02; slope change of 0.003, P=.005). For young adults aged 18 to 24 years, females who used primary health care experienced the most significant increases in mental health visit rates, from 26.5/1000 in April 2006 to 29.2/1000 in April 2015 (slope change of 0.17 following...
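    The study's method, interrupted time-series ARIMA modeling with a lagged intervention date, can be sketched on synthetic data as follows. The model order, regressor construction, and all numbers are illustrative assumptions, not the authors' specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Interrupted time-series sketch on synthetic monthly visit rates, with the
# intervention lagged to April 2012 as in the study design. Level- and
# slope-change regressors enter an ARIMA model as exogenous terms.
idx = pd.date_range("2006-01-01", "2015-12-01", freq="MS")
rng = np.random.default_rng(0)
t = np.arange(len(idx), dtype=float)

level = (idx >= "2012-04-01").astype(float)   # 0/1 step at the intervention
k = int(np.argmax(level))                     # first post-intervention month
slope = level * (t - k)                       # post-intervention trend change
y = 10 + 0.01 * t + 0.09 * slope + rng.normal(0, 0.3, len(idx))

X = pd.DataFrame({"level": level, "slope": slope}, index=idx)
fit = ARIMA(pd.Series(y, index=idx), exog=X, order=(1, 0, 0)).fit()
print(fit.params[["level", "slope"]])         # estimated step and slope change
```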

  10. Laser ablation-inductively coupled plasma-dynamic reaction cell-mass spectrometry for the multi-element analysis of polymers

    International Nuclear Information System (INIS)

    Resano, M.; Garcia-Ruiz, E.; Vanhaecke, F.

    2005-01-01

    In this work, the potential of laser ablation-inductively coupled plasma-mass spectrometry for the fast analysis of polymers has been explored. Different real-life samples (polyethylene shopping bags, an acrylonitrile butadiene styrene material and various plastic bricks) as well as several reference materials (VDA 001 to 004, Cd in polyethylene) have been selected for the study. Two polyethylene reference materials (ERM-EC 680 and 681), for which a reference or indicative value for the most relevant metals is available, have proved their suitability as standards for calibration. Special attention has been paid to the difficulties expected for the determination of Cr at the μg g⁻¹ level in this kind of material, due to the interference of ArC⁺ ions with the most abundant isotopes of Cr. The use of ammonia as a reaction gas in a dynamic reaction cell is shown to alleviate this problem, resulting in a limit of detection of 0.15 μg g⁻¹ for this element, while only modestly limiting the possibilities of the technique for simultaneous multi-element analysis. In this regard, As is the analyte most seriously affected by the use of ammonia, and its determination has to be carried out in vented mode, at the expense of measuring time. In all cases studied, accurate results could be obtained for elements ranging in content from the sub-μg g⁻¹ level to tens of thousands of μg g⁻¹. However, the use of an element of known concentration as internal standard may be needed for materials with a matrix significantly different from that of the standard (polyethylene in this work). Precision ranged between 5% and 10% RSD for elements found at the 10 μg g⁻¹ level or higher, while this value could deteriorate to 20% for analytes found at the sub-μg g⁻¹ level. Overall, the technique evaluated presents many advantages for the fast and accurate multi-element analysis of these materials, avoiding laborious digestion procedures and minimizing the risk of analyte losses due to the...

  11. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large-scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large-scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu...

  12. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    Recent trends in large-scale simulations of fusion and processing plasmas are briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  13. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  14. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  15. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  16. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  17. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  18. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy, and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under considerations of profitability, nor can it hope for full recognition by the basic-research-oriented scientific community. Policy consulting neither holds the political system's mandate to make decisions, nor can it judge successfully by the critical standards of established social science, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three respects, the thesis that this is a new form of institutionalization of science: (1) external control, (2) the form of organization, and (3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  19. A Short Survey on the State of the Art in Architectures and Platforms for Large Scale Data Analysis and Knowledge Discovery from Data

    Energy Technology Data Exchange (ETDEWEB)

    Begoli, Edmon [ORNL

    2012-01-01

    Intended as a survey for practicing architects and researchers seeking an overview of the state-of-the-art architectures for data analysis, this paper provides an overview of the emerging data management and analytic platforms, including parallel databases, Hadoop-based systems, High Performance Computing (HPC) platforms and platforms popularly referred to as NoSQL platforms. Platforms are presented based on their relevance, the analyses they support, and the data organization models they support.

  20. Multi-element analysis of the rat hippocampus by proton induced x-ray emission spectroscopy (phosphorus, sulfur, chlorine, potassium, calcium, iron, zinc, copper, lead, bromine, and rubidium)

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, K.; Danscher, G.

    1979-01-22

    A technique for multi-element analysis of brain tissue by proton induced x-ray emission spectroscopy (PIXE) is described, and data from analysis of fixed and unfixed samples from rat hippocampus, neocortex, amygdala, and spinal cord are presented and commented on. The atoms present in the tissue are bombarded with protons, which cause the ejection of electrons from the inner shells. When the holes are refilled with electrons from outer shells, x-ray quanta characteristic of each element are emitted. Using a high resolution energy dispersive detector, a complete x-ray spectrum of the specimen can be recorded in a single measurement. Detection limits of approximately 5 ppm of dry matter or less are obtained for most elements with atomic number greater than 14 (silicon). Around 13 elements were found in concentrations above the detection limits. The grand means for non-fixed hippocampi were, e.g., Zn 120 ppm; Rb 20 ppm; Fe 150 ppm; Pb 3 ppm; Ni 5 ppm.
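    A rule-of-thumb detection-limit estimate for spectroscopy of this kind takes the concentration whose peak equals three times the square root of the background counts under the peak. The counts and sensitivity below are invented, chosen only to land near the ~5 ppm figure quoted.

```python
import math

# Rule-of-thumb detection limit: concentration giving a peak equal to
# 3 * sqrt(background counts). All input numbers are illustrative.
def detection_limit_ppm(background_counts, sensitivity_counts_per_ppm):
    return 3 * math.sqrt(background_counts) / sensitivity_counts_per_ppm

print(detection_limit_ppm(background_counts=900, sensitivity_counts_per_ppm=20))
# -> 4.5 ppm, on the order of the ~5 ppm limit quoted for Z > 14
```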

  1. Comment on 'Large-Scale Cognitive GWAS Meta-Analysis Reveals Tissue-Specific Neural Expression and Potential Nootropic Drug Targets' by Lam et al.

    Science.gov (United States)

    Hill, W David

    2018-04-01

    Intelligence and educational attainment are strongly genetically correlated. This relationship can be exploited by Multi-Trait Analysis of GWAS (MTAG) to add power to Genome-wide Association Studies (GWAS) of intelligence. MTAG allows the user to meta-analyze GWASs of different phenotypes, based on their genetic correlations, to identify associations specific to the trait of choice. An MTAG analysis using GWAS data sets on intelligence and education was conducted by Lam et al. (2017). Lam et al. (2017) reported 70 loci that they described as 'trait specific' to intelligence. This article examines whether the analysis conducted by Lam et al. (2017) has resulted in genetic information about a phenotype that is more similar to education than to intelligence.

  2. Skewness of citation impact data and covariates of citation distributions: A large-scale empirical analysis based on Web of Science data

    NARCIS (Netherlands)

    Bornmann, L.; Leydesdorff, L.

    Using percentile shares, one can visualize and analyze the skewness in bibliometric data across disciplines and over time. The resulting figures can be intuitively interpreted and are more suitable for detailed analysis of the effects of independent and control variables on distributions than
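    Percentile shares, as used here, report the fraction of all citations held by the top x% of papers, which makes the skewness of citation distributions directly visible. The sketch below computes them on a synthetic heavy-tailed distribution standing in for Web of Science counts.

```python
import numpy as np

# Percentile-share computation on synthetic, heavy-tailed citation counts.
rng = np.random.default_rng(0)
citations = rng.negative_binomial(1, 0.1, size=10_000)  # heavy-tailed stand-in

sorted_c = np.sort(citations)[::-1]   # most-cited papers first
total = sorted_c.sum()
for pct in (1, 10, 50):
    k = int(len(sorted_c) * pct / 100)
    share = 100 * sorted_c[:k].sum() / total
    print(f"top {pct:>2}% of papers hold {share:.1f}% of citations")
```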

  3. Large scale international replication and meta-analysis study confirms association of the 15q14 locus with myopia. The CREAM consortium

    NARCIS (Netherlands)

    Verhoeven, Virginie J. M.; Hysi, Pirro G.; Saw, Seang-Mei; Vitart, Veronique; Mirshahi, Alireza; Guggenheim, Jeremy A.; Cotch, Mary Frances; Yamashiro, Kenji; Baird, Paul N.; Mackey, David A.; Wojciechowski, Robert; Ikram, M. Kamran; Hewitt, Alex W.; Duggal, Priya; Janmahasatian, Sarayut; Khor, Chiea-Chuen; Fan, Qiao; Zhou, Xin; Young, Terri L.; Tai, E