WorldWideScience

Sample records for large scale method

  1. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m × m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  2. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords. Sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. 1. Introduction. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  3. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.

  4. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  5. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  6. Gamma Ray Tomographic Scan Method for Large Scale Industrial Plants

    International Nuclear Information System (INIS)

    Moon, Jin Ho; Jung, Sung Hee; Kim, Jong Bum; Park, Jang Geun

    2011-01-01

    Gamma ray tomography systems have been used to investigate chemical processes for the last decade. There have been many cases of gamma ray tomography for laboratory-scale work, but few for industrial-scale work. Non-tomographic equipment with gamma-ray sources is often used in process diagnosis; gamma radiography, gamma column scanning and the radioisotope tracer technique are examples of gamma ray applications in industry. Although much non-tomographic gamma-ray equipment is used outdoors, most gamma ray tomographic systems have remained indoor equipment. As gamma tomography has developed, the demand for gamma tomography of real-scale plants has also increased. To develop an industrial-scale system, we introduced a gamma-ray tomographic system with fixed detectors and a rotating source. The general system configuration is similar to fourth-generation geometry, but the main effort has been made to enable rapid installation of the system at a real-scale industrial plant. This work is a first attempt to apply fourth-generation industrial gamma tomographic scanning experimentally. Individual 0.5-inch NaI detectors were used for gamma ray detection, arranged in a circle around the industrial plant. This tomographic scan method reduces mechanical complexity and requires much less space than a conventional CT. These properties make it easy to obtain measurement data for a real-scale plant.

  7. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstruction was tested in a key area of the Central Dubna basin, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of this work is as follows. 1. Comprehensive analysis of topographic maps of different scales and of aerial and satellite images, archival materials of geological and hydrological surveys and peat deposit prospecting, archaeological evidence on ancient settlements, palynological and osteological analyses, and results of complex landscape and archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure. On this basis, edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene. The boundaries of the restored paleolakes were determined based on the thickness and territorial confinement of decay ooze. 5. On the basis of the landscape-edaphic method, the actual paleolandscape reconstructions for the main periods of the Holocene were performed. For the reconstruction of the original, indigenous flora we relied on data from palynological studies conducted in the study area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the

  8. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that randomize and replicate the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomizing and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  9. Kernel methods for large-scale genomic data analysis

    Science.gov (United States)

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated to be a promising new tool for tackling the challenges imposed by today’s explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743
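
    As a rough illustration of the kind of kernel method described above (not code from the review), the sketch below fits a kernel ridge regression with an RBF kernel to a synthetic genotype matrix; the sample sizes, gamma and regularization strength are arbitrary placeholders.

```python
# Minimal sketch (synthetic data, hypothetical parameters): kernel ridge
# regression relating many genetic variants to a phenotype.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_snps = 200, 5000                                    # hypothetical sizes
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)   # 0/1/2 genotypes
y = X[:, :10].sum(axis=1) + rng.normal(size=n_samples)           # toy phenotype

def rbf_kernel(A, B, gamma=1e-4):
    """Gaussian (RBF) kernel between rows of A and rows of B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

lam = 1.0                                            # ridge regularization strength
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n_samples), y)   # dual coefficients

X_new = rng.integers(0, 3, size=(5, n_snps)).astype(float)
y_pred = rbf_kernel(X_new, X) @ alpha                # kernel prediction for new samples
print(y_pred)
```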

  10. LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM

    Science.gov (United States)

    Higgins, G.H.; Crane, W.W.T.

    1959-05-19

    A large-scale process for production and purification of Cm²⁴² is described. Aluminum slugs containing Am are irradiated and declad in a NaOH–NaNO₃ solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH₄OH, and H₂O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)₃ precipitation. The precipitates are then combined and dissolved in HCl, and refractory oxides are centrifuged out. These oxides are then fused with Na₂CO₃ and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl–HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)

  11. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  12. Interior Point Methods for Large-Scale Nonlinear Programming

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2005-01-01

    Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005

  13. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have scarcely been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...

  14. Large-scale atomic calculations using variational methods

    Energy Technology Data Exchange (ETDEWEB)

    Joensson, Per

    1995-01-01

    Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p ²P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest ²P states in sodium and silver. 77 refs, 2 figs, 14 tabs.

  15. Large-scale atomic calculations using variational methods

    International Nuclear Information System (INIS)

    Joensson, Per.

    1995-01-01

    Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p ²P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest ²P states in sodium and silver. 77 refs, 2 figs, 14 tabs.

  16. A large-scale benchmark of gene prioritization methods.

    Science.gov (United States)

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized to construct robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
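
    For illustration only (this is not the benchmark's code), the sketch below scores candidate genes with Random Walk with Restart, one of the network-diffusion approaches compared above, on a toy gene network with a single seed gene.

```python
# Illustrative Random Walk with Restart (RWR) on a toy gene network.
import numpy as np

def rwr(W, seeds, restart=0.5, tol=1e-8, max_iter=1000):
    """W: column-normalized adjacency matrix; seeds: indices of known disease genes."""
    n = W.shape[0]
    p0 = np.zeros(n)
    p0[seeds] = 1.0 / len(seeds)        # restart distribution over seed genes
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1.0 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p                            # steady-state scores = prioritization ranking

# Toy 5-gene network (symmetric adjacency), column-normalized
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
W = A / A.sum(axis=0, keepdims=True)
scores = rwr(W, seeds=[0])              # rank genes by network proximity to seed gene 0
print(np.argsort(-scores))
```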

  17. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄).

  18. A fast method for large-scale isolation of phages from hospital ...

    African Journals Online (AJOL)

    This plaque-forming method could be adopted to isolate E. coli phage easily, rapidly and in large quantities. Among the 18 isolated E. coli phages, 10 of them had a broad host range in E. coli and warrant further study. Key words: Escherichia coli phages, large-scale isolation, drug resistance, biological properties.

  19. Task-Management Method Using R-Tree Spatial Cloaking for Large-Scale Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Yan Li

    2017-12-01

    With the development of sensor technology and the popularization of the data-driven service paradigm, spatial crowdsourcing systems have become an important way of collecting map-based location data. However, large-scale task management and location privacy are important factors for participants in spatial crowdsourcing. In this paper, we propose the use of an R-tree spatial cloaking-based task-assignment method for large-scale spatial crowdsourcing. We use an estimated R-tree built from the requested crowdsourcing tasks to reduce the server-side insertion cost and enable scalability. By using Minimum Bounding Rectangle (MBR)-based spatially anonymized data without exact position data, this method preserves the location privacy of participants in a simple way. In our experiments, we show that the proposed method is faster than the current method and is very efficient as the scale increases.
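
    A minimal sketch of the cloaking idea described above, under the assumption that anonymization simply reports the Minimum Bounding Rectangle of several nearby task locations instead of an exact position; the coordinates and helper names are hypothetical.

```python
# Minimal MBR-based spatial cloaking sketch (not the paper's implementation).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MBR:
    min_x: float
    min_y: float
    max_x: float
    max_y: float

    def contains(self, x: float, y: float) -> bool:
        return self.min_x <= x <= self.max_x and self.min_y <= y <= self.max_y

def cloak(points: List[Tuple[float, float]]) -> MBR:
    """Return the MBR covering all points, i.e. the anonymized region."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return MBR(min(xs), min(ys), max(xs), max(ys))

tasks = [(12.3, 45.1), (12.7, 45.4), (12.5, 45.0)]   # hypothetical task locations
region = cloak(tasks)
print(region, region.contains(12.4, 45.2))           # the server sees only the region
```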

  20. A family of conjugate gradient methods for large-scale nonlinear equations

    Directory of Open Access Journals (Sweden)

    Dexiang Feng

    2017-09-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method requires low storage and the subproblem can be easily solved. Compared with existing solution methods for the problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
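
    The following is a schematic sketch of a generic derivative-free projection method with a PRP-type conjugate gradient direction for monotone nonlinear equations F(x) = 0; the specific direction formula and line search used in the paper differ, and the test function here is a toy example.

```python
# Schematic sketch only: generic hyperplane-projection iteration with a
# PRP-type conjugate gradient direction for monotone F(x) = 0.
import numpy as np

def cg_projection_solve(F, x0, sigma=1e-4, rho=0.5, tol=1e-8, max_iter=500):
    x = x0.copy()
    Fx = F(x)
    d = -Fx
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        # Backtracking search: find z = x + t*d with -F(z)^T d >= sigma*t*||d||^2
        t = 1.0
        while True:
            z = x + t * d
            Fz = F(z)
            if -Fz @ d >= sigma * t * (d @ d) or t < 1e-12:
                break
            t *= rho
        # Project x onto the hyperplane separating it from the solution set
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
        F_prev, Fx = Fx, F(x)
        beta = Fx @ (Fx - F_prev) / (F_prev @ F_prev)   # PRP-type parameter
        d = -Fx + beta * d
    return x

# Toy monotone system: F(x) = x + sin(x) componentwise, with root x = 0
F = lambda x: x + np.sin(x)
x_sol = cg_projection_solve(F, np.full(1000, 2.0))
print(np.linalg.norm(F(x_sol)))        # residual norm of the computed solution
```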

  1. A family of conjugate gradient methods for large-scale nonlinear equations.

    Science.gov (United States)

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method requires low storage and the subproblem can be easily solved. Compared with existing solution methods for the problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.

  2. Methods for large-scale international studies on ICT in education

    NARCIS (Netherlands)

    Pelgrum, W.J.; Plomp, T.; Voogt, Joke; Knezek, G.A.

    2008-01-01

    International comparative assessment is a research method applied for describing and analyzing educational processes and outcomes. Such assessments are used to ‘describe the status quo’ in educational systems from an international comparative perspective. This chapter reviews different large scale international

  3. FDTD method for laser absorption in metals for large scale problems.

    Science.gov (United States)

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grid cells. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
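
    To make the grid-count issue concrete, here is a minimal standard 1D Yee FDTD loop (not the paper's wavelength-scaling scheme): with roughly 20 cells per wavelength at 1.06 μm, even a 50 μm domain needs about a thousand cells, and a several-millimeter domain needs orders of magnitude more.

```python
# Minimal 1D Yee FDTD loop illustrating why cell count scales with wavelength.
import numpy as np

c0, mu0, eps0 = 3e8, 4e-7 * np.pi, 8.854e-12
wavelength = 1.06e-6                  # 1.06 um, as in the validation cases
dz = wavelength / 20.0                # ~20 cells per wavelength
dt = dz / c0                          # 1D Courant limit
nz = int(50e-6 / dz)                  # 50 um domain; a few mm needs ~100x more cells
Ex = np.zeros(nz)                     # electric field on the Yee grid
Hy = np.zeros(nz - 1)                 # magnetic field on the staggered grid

for n in range(2000):
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])                     # update H from curl E
    Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])              # update E from curl H
    Ex[nz // 4] += np.sin(2 * np.pi * (c0 / wavelength) * n * dt)  # soft source

print("grid cells:", nz)
```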

  4. Methods Dealing with Complexity in Selecting Joint Venture Contractors for Large-Scale Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-01-01

    The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to “go it alone”. As a consequence, joint venture contractors with diverse strengths and weaknesses cooperatively bid for projects. Understanding project complexity and deciding on the optimal joint venture contractor is challenging. This paper studies how to select joint venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one is based on ideal points and the other on balanced ideal advantages. Both methods consider individual differences in expert judgment and contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China is used to demonstrate how to apply these two methods and their advantages.

  5. The method of arbitrarily large moments to calculate single scale processes in quantum field theory

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC)

    2017-01-15

    We devise a new method to calculate a large number of Mellin moments of single scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered, holding in case of first order factorizing systems. In any case, one may derive highly precise numerical representations in general using this method, which is otherwise completely analytic.

  6. The method of arbitrarily large moments to calculate single scale processes in quantum field theory

    Directory of Open Access Journals (Sweden)

    Johannes Blümlein

    2017-08-01

    We devise a new method to calculate a large number of Mellin moments of single scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered, holding in case of first order factorizing systems. In any case, one may derive highly precise numerical representations in general using this method, which is otherwise completely analytic.

  7. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄).

  8. Subspace Barzilai-Borwein Gradient Method for Large-Scale Bound Constrained Optimization

    International Nuclear Information System (INIS)

    Xiao Yunhai; Hu Qingjie

    2008-01-01

    An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising, and competitive with the well-known method SPG on a subset of bound constrained problems from the CUTEr collection.
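
    A hedged sketch of the ingredient the abstract builds on, the Barzilai-Borwein step length combined with projection onto the bounds; this is a plain projected BB iteration on a toy quadratic, not the paper's active-set subspace algorithm.

```python
# Projected Barzilai-Borwein gradient sketch for min f(x) with l <= x <= u.
import numpy as np

def projected_bb(grad, x0, l, u, n_iter=500):
    clip = lambda z: np.clip(z, l, u)          # projection onto the box
    x = clip(x0)
    g = grad(x)
    alpha = 1.0
    for _ in range(n_iter):
        x_new = clip(x - alpha * g)            # projected gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0   # BB1 step length s^T s / s^T y
        x, g = x_new, g_new
    return x

# Toy convex quadratic: f(x) = 0.5 x^T A x - b^T x on the box [0, 1]^n
n = 100
A = np.diag(np.linspace(1.0, 10.0, n))
b = np.ones(n)
grad = lambda x: A @ x - b
x_star = projected_bb(grad, np.zeros(n), 0.0, 1.0)
print(np.abs(x_star - 1.0 / np.diag(A)).max())  # the unconstrained optimum lies in the box
```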

  9. Test methods of total dose effects in very large scale integrated circuits

    International Nuclear Information System (INIS)

    He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng

    2004-01-01

    A test method for total dose effects (TDE) in very large scale integrated circuits (VLSI) is presented. The consumption current of the devices is measured while the functional parameters of the devices (or circuits) are measured. The relation between data errors and consumption current can then be analyzed, and the mechanism of TDE in VLSI can be proposed. Experimental results of ⁶⁰Co γ TDEs are given for SRAMs, EEPROMs, FLASH ROMs and a CPU.

  10. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  11. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of the electrolysis modules currently available was compiled. A review of the large scale electrolysis plants that have been installed in the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers was discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  12. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers' route choice preferences, and the urban road network is decomposed effectively into multiple layers. Utilizing generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we compare the proposed path search method with current advanced optimal path algorithms. The results demonstrate that the proposed method can increase road network search efficiency by more than 16% under different search proportion requests, node numbers, and numbers of computing processes. This method is therefore a significant advance for urban road network route guidance.
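
    As a simple point of reference (not the paper's parallel Pallottino implementation), the sketch below runs a standard Dijkstra search on a toy road graph whose edge weights stand in for the generalized travel time impedance.

```python
# Standard Dijkstra shortest-route search with generalized travel time as impedance.
import heapq

def shortest_route(graph, src, dst):
    """graph: {node: [(neighbor, generalized_travel_time), ...]}"""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node in prev:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist.get(dst, float("inf"))

# Toy network; weights could combine time, cost and traveler preference.
roads = {"A": [("B", 4.0), ("C", 2.0)], "C": [("B", 1.0), ("D", 7.0)],
         "B": [("D", 3.0)]}
print(shortest_route(roads, "A", "D"))    # -> (['A', 'C', 'B', 'D'], 6.0)
```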

  13. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T₁, T₂, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF-fast imaging with steady-state precession sequence and more than 15-fold for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
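
    The core compression idea can be sketched as follows, with a synthetic rank-limited dictionary standing in for a real MRF dictionary; the randomized SVD follows the usual range-finder construction, and the sizes and rank are illustrative only.

```python
# Randomized SVD compression of a synthetic "dictionary" and low-rank matching.
import numpy as np

def randomized_svd(D, rank, n_oversample=10, seed=None):
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((D.shape[1], rank + n_oversample))
    Q, _ = np.linalg.qr(D @ Omega)                 # orthonormal range approximation
    B = Q.T @ D                                    # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(1)
# Columns = parameter combinations, rows = time points; exact rank 20 by construction
D = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 10000))
U, s, Vt = randomized_svd(D, rank=20)

signal = D[:, 2345] + 0.01 * rng.standard_normal(500)   # noisy "measured" signal
coeffs = U.T @ signal                                    # project into low-rank space
compressed_dict = U.T @ D                                # done once; large memory saving
match = np.argmax(compressed_dict.T @ coeffs /
                  np.linalg.norm(compressed_dict, axis=0) / np.linalg.norm(coeffs))
print(match)                                             # index of best-matching entry
```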

  14. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, with a view to ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents through joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  15. New Distributed Multipole Methods for Accurate Electrostatics for Large-Scale Biomolecular Simulations

    Science.gov (United States)

    Sagui, Celeste

    2006-03-01

    An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign “partial charges” to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate the artifacts associated with the point-charges (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules) used in the force fields in a physically meaningful way? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid method, one can treat electrostatic multipoles all the way to hexadecapoles without prohibitive extra costs. The importance of these methods for large-scale simulations will be discussed, and exemplified by simulations from polarizable DNA models.

  16. 3D large-scale calculations using the method of characteristics

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    An overview of the computational requirements and the numerical developments made in order to be able to solve 3D large-scale problems using the characteristics method will be presented. To accelerate the MCI solver, efficient acceleration techniques were implemented and parallelization was performed. However, for the very large problems, the size of the tracking file used to store the tracks can still become prohibitive and exceed the capacity of the machine. The new 3D characteristics solver MCG will now be introduced. This methodology is dedicated to solve very large 3D problems (a part or a whole core) without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we define a new computing scheme that requires more CPU resources than the usual one, based on sweeps over large tracking files. The huge capacity of storage needed in some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (author)

  17. Symplectic integrators for large scale molecular dynamics simulations: A comparison of several explicit methods

    International Nuclear Information System (INIS)

    Gray, S.K.; Noid, D.W.; Sumpter, B.G.

    1994-01-01

    We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long time trajectories of a 1000 unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are deduced. We also discuss certain variations on the basic symplectic integration technique
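
    A minimal example of one member of this family of explicit symplectic schemes, the velocity Verlet (leapfrog) integrator, applied to a single harmonic oscillator rather than a polymer chain; energy stays bounded over long times instead of drifting.

```python
# Velocity Verlet (a simple explicit symplectic integrator) for a harmonic oscillator.
import numpy as np

def velocity_verlet(q, p, force, dt, n_steps, mass=1.0):
    traj = []
    f = force(q)
    for _ in range(n_steps):
        p = p + 0.5 * dt * f            # half kick
        q = q + dt * p / mass           # drift
        f = force(q)
        p = p + 0.5 * dt * f            # half kick
        traj.append((q, p))
    return np.array(traj)

k = 1.0                                  # spring constant
force = lambda q: -k * q
traj = velocity_verlet(q=1.0, p=0.0, force=force, dt=0.05, n_steps=20000)
energy = 0.5 * traj[:, 1] ** 2 + 0.5 * k * traj[:, 0] ** 2
print(energy.min(), energy.max())        # bounded oscillation around 0.5, no drift
```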

  18. Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.

    Science.gov (United States)

    Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L

    2002-09-01

    We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
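
    The overall iteration can be sketched as a standard preconditioned conjugate-gradient loop with the preconditioner left abstract; in the paper the preconditioner is a multigrid cycle for the wave-front reconstruction system, whereas the sketch below just passes a Jacobi (diagonal) preconditioner and a toy SPD matrix.

```python
# Generic preconditioned conjugate-gradient (PCG) loop with a pluggable preconditioner.
import numpy as np

def pcg(A, b, precond, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system standing in for the reconstructor's normal equations
n = 500
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
x = pcg(A, b, precond=lambda r: r / np.diag(A))   # Jacobi preconditioner as a stand-in
print(np.linalg.norm(A @ x - b))
```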

  19. A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things

    OpenAIRE

    Wang, Yongheng; Cao, Kening

    2014-01-01

    The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is design...

  20. An improved method to characterise the modulation of small-scale turbulence by large-scale structures

    Science.gov (United States)

    Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta

    2015-11-01

    A key aspect of turbulent boundary layer dynamics is “modulation,” which refers to the degree to which the intensity of coherent large-scale structures (LS) amplifies or attenuates the intensity of the small-scale structures (SS) through large-scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define this envelope by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by use of a low-pass filtering step leads to an important loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
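
    The envelope construction discussed above can be sketched directly with SciPy's Hilbert transform: take the modulus of the analytic signal of the small-scale signal and low-pass filter it; the synthetic signal, frequencies and cutoff below are illustrative only.

```python
# Hilbert-transform envelope of a synthetic amplitude-modulated "small-scale" signal.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000.0                                  # sampling frequency (arbitrary units)
t = np.arange(0, 10, 1 / fs)
LS = np.sin(2 * np.pi * 0.5 * t)             # slow, large-scale fluctuation
SS = (1.0 + 0.5 * LS) * np.sin(2 * np.pi * 60 * t)   # small scales modulated by LS

b, a = butter(4, 2.0 / (fs / 2), btype="low")        # 2 Hz low-pass filter
envelope_raw = np.abs(hilbert(SS))                   # modulus of the analytic signal
envelope = filtfilt(b, a, envelope_raw)              # filtered envelope of SS

# Correlating the envelope with LS quantifies the degree of modulation
corr = np.corrcoef(envelope, LS)[0, 1]
print(round(corr, 3))
```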

  1. Particle generation methods applied in large-scale experiments on aerosol behaviour and source term studies

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Gomez, F.J.; Martin, M.

    1997-01-01

    In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests. In large scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of the applications of these methods in large scale experiments on aerosol behaviour and source term. The description of each generation method and of the transport conditions of the generated aerosol is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever available. Information concerning the particular purpose of the aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating or a plasma torch; atomization of liquid, using compressed air nebulizers, ultrasonic nebulizers and atomization of liquid suspensions; and dispersion of powders. Among the projects included in this work are: ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO₂, UO₂, Al₂O₃, Al₂SiO₅, B₂O₃, Cd, CdO, Fe₂O₃, MnO, SiO₂, AgO, SnO₂, Te, U₃O₈, BaO, CsCl, CsNO₃, urania, RuO₂, TiO₂, Al(OH)₃, BaSO₄, Eu₂O₃ and Sn. (Author)

  2. Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method

    Directory of Open Access Journals (Sweden)

    Qing-He Yao

    2014-01-01

    The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated by comparison with experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and oscillations appeared in past research reported by other researchers. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large scale computation.

  3. A novel bonding method for large scale poly(methyl methacrylate) micro- and nanofluidic chip fabrication

    Science.gov (United States)

    Qu, Xingtian; Li, Jinlai; Yin, Zhifu

    2018-04-01

    Micro- and nanofluidic chips are becoming increasingly significant for biological and medical applications. Future advances in micro- and nanofluidics and their utilization in commercial applications depend on the development and fabrication of low cost, high fidelity, large scale plastic micro- and nanofluidic chips. However, the majority of the present fabrication methods suffer from a low bonding rate of the chip during the thermal bonding process due to air trapping between the substrate and the cover plate. In the present work, a novel bonding technique based on Ar plasma and water treatment was proposed to fully bond large scale micro- and nanofluidic chips. The influence of the Ar plasma parameters on the water contact angle and the effect of the bonding conditions on the bonding rate and the bonding strength of the chip were studied. The fluorescence tests demonstrate that a 5 × 5 cm² poly(methyl methacrylate) chip with 180 nm wide and 180 nm deep nanochannels can be fabricated without any blockage or leakage by our newly developed method.

  4. Workshop report on large-scale matrix diagonalization methods in chemistry theory institute

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S. [eds.

    1996-10-01

    The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint for the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10⁴ and 10⁹ where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of
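
    As a small, hedged illustration of the "few extremal eigenvalues of a very large matrix" setting (using the Lanczos-based solver available in SciPy rather than a Davidson implementation), the snippet below extracts the lowest few eigenpairs of a large sparse symmetric matrix in shift-invert mode.

```python
# Lowest few eigenpairs of a large sparse symmetric matrix via SciPy's Lanczos solver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 20_000
rng = np.random.default_rng(0)
diag = rng.uniform(1.0, 10.0, n)                       # dominant diagonal
offdiag = 0.01 * rng.standard_normal(n - 1)            # weak off-diagonal coupling
H = sp.diags([offdiag, diag, offdiag], offsets=[-1, 0, 1], format="csc")

# Shift-invert about 0 targets the eigenvalues closest to zero, i.e. the lowest states,
# without ever forming a dense matrix.
vals, vecs = eigsh(H, k=4, sigma=0.0, which="LM")
print(vals)
```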

  5. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chro- mosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate the performance of these methods. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and to compare the performance of these methods, we proposed a novel data simulation method, which combined the population Hi-C data and single-cell Hi-C data without ad hoc parameters. Also, we designed a global and a local alignment algorithms to measure the similarity between the templates and the chromatin struc- tures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform the algorithms in literature.

  6. Progresses in application of computational fluid dynamic methods to large scale wind turbine aerodynamics

    Institute of Scientific and Technical Information of China (English)

    Zhenyu ZHANG; Ning ZHAO; Wei ZHONG; Long WANG; Bofeng XU

    2016-01-01

    The computational fluid dynamics (CFD) methods are applied to aerodynamic problems for large scale wind turbines. The progresses including the aerodynamic analyses of wind turbine profiles, numerical flow simulation of wind turbine blades, evaluation of aerodynamic performance, and multi-objective blade optimization are discussed. Based on the CFD methods, significant improvements are obtained to predict two/three-dimensional aerodynamic characteristics of wind turbine airfoils and blades, and the vortical structure in their wake flows is accurately captured. Combining with a multi-objective genetic algorithm, a 1.5 MW NH-1500 optimized blade is designed with high efficiency in wind energy conversion.

  7. Application of spectral Lanczos decomposition method to large scale problems arising in geophysics

    Energy Technology Data Exchange (ETDEWEB)

    Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)

    1996-12-31

    This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to the numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution of this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).

  8. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data

    Directory of Open Access Journals (Sweden)

    Sungho Won

    2015-01-01

    Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these findings have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a big hurdle to building disease prediction models. Recently, many statistical methods based on penalized regression have been proposed to tackle the so-called “large P and small N” problem. Penalized regressions, including the least absolute selection and shrinkage operator (LASSO) and ridge regression, limit the parameter space, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than existing methods, at least for the diseases under consideration.
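
    A toy sketch of the penalized-regression idea in the "large P, small N" setting: a ridge estimator on simulated SNP data, where the L2 penalty makes the normal equations well-posed even though predictors far outnumber samples; all data and the penalty value are simulated placeholders.

```python
# Ridge regression on simulated SNP data with far more predictors than samples.
import numpy as np

rng = np.random.default_rng(42)
n, p = 300, 2000                         # many more SNPs than individuals
X = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotype codes
beta_true = np.zeros(p)
beta_true[:20] = 0.3                     # a few weak causal effects
y = X @ beta_true + rng.normal(size=n)

lam = 50.0                               # ridge penalty strength (hypothetical)
# beta_hat = (X^T X + lam I)^(-1) X^T y, solved without explicit matrix inversion
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(beta_hat[:5])                      # shrunken estimates of the causal SNPs
# A LASSO fit would replace the L2 penalty with an L1 penalty and require an
# iterative solver (e.g. coordinate descent) instead of this closed form.
```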

  9. Two methods for estimating limits to large-scale wind power generation.

    Science.gov (United States)

    Miller, Lee M; Brunsell, Nathaniel A; Mechem, David B; Gans, Fabian; Monaghan, Andrew J; Vautard, Robert; Keith, David W; Kleidon, Axel

    2015-09-08

    Wind turbines remove kinetic energy from the atmospheric flow, which reduces wind speeds and limits generation rates of large wind farms. These interactions can be approximated using a vertical kinetic energy (VKE) flux method, which predicts that the maximum power generation potential is 26% of the instantaneous downward transport of kinetic energy using the preturbine climatology. We compare the energy flux method to the Weather Research and Forecasting (WRF) regional atmospheric model equipped with a wind turbine parameterization over a 10^5 km^2 region in the central United States. The WRF simulations yield a maximum generation of 1.1 We⋅m^-2, whereas the VKE method predicts the time series while underestimating the maximum generation rate by about 50%. Because VKE derives the generation limit from the preturbine climatology, potential changes in the vertical kinetic energy flux from the free atmosphere are not considered. Such changes are important at night, when WRF estimates are about twice the VKE value because wind turbines interact with the decoupled nocturnal low-level jet in this region. Daytime estimates agree to within 20% because the wind turbines induce comparatively small changes to the downward kinetic energy flux. This combination of downward transport limits and wind speed reductions explains why large-scale wind power generation in windy regions is limited to about 1 We⋅m^-2, with VKE capturing this combination in a comparatively simple way.
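
    The VKE limit itself is a simple calculation; the sketch below shows the arithmetic with made-up flux values (the 26% factor is the only number taken from the abstract).

```python
import numpy as np

# Hypothetical hub-height downward kinetic energy flux from a pre-turbine climatology (W m^-2).
ke_flux_down = np.array([2.5, 4.0, 6.1, 3.2, 1.8])

# VKE generation limit: 26% of the instantaneous downward kinetic energy transport.
vke_limit = 0.26 * ke_flux_down
print("generation limit (We per m^2):", vke_limit.round(2))
```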

  10. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
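
    A minimal sketch of first-order, derivative-based uncertainty propagation in the spirit of the DUA approach is given below; the model, parameter values, and use of finite differences (instead of a computer-calculus compiler such as GRESS/ADGEN) are all illustrative assumptions.

```python
import numpy as np

def model(p):
    # Hypothetical scalar response (e.g. a flow rate) as a function of three parameters.
    k, h, mu = p
    return k * h / mu

p0 = np.array([1.0e-5, 10.0, 1.0e-3])        # nominal parameter values (illustrative)
sigma = np.array([1.0e-6, 0.5, 5.0e-5])      # parameter standard deviations (illustrative)

# Sensitivities dy/dp_i via central differences (two extra model runs per parameter).
grad = np.zeros_like(p0)
for i in range(p0.size):
    dp = np.zeros_like(p0)
    dp[i] = 1e-6 * max(abs(p0[i]), 1.0)
    grad[i] = (model(p0 + dp) - model(p0 - dp)) / (2 * dp[i])

# First-order propagation of independent parameter variances to the response variance.
var_y = np.sum((grad * sigma) ** 2)
print("response:", model(p0), "std dev:", np.sqrt(var_y))
```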

  11. A high-speed transmission method for large-scale marine seismic prospecting systems

    International Nuclear Information System (INIS)

    KeZhu, Song; Ping, Cao; JunFeng, Yang; FuMing, Ruan

    2012-01-01

    A marine seismic prospecting system is a kind of data acquisition and transmission system with large-scale coverage and synchronous multi-node acquisition. In this kind of system, data transmission is a fundamental and difficult technique. In this paper, a high-speed data-transmission method is proposed, its implications and limitations are discussed, and conclusions are drawn. The method we propose has obvious advantages over traditional techniques with respect to long-distance operation, high speed, and real-time transmission. A marine seismic system with four streamers, each 6000 m long and capable of supporting up to 1920 channels, was designed and built based on this method. The effective transmission baud rate of this system was found to reach up to 240 Mbps, while the minimum sampling interval time was as short as 0.25 ms. This system was found to achieve a good synchronization: 83 ns. Laboratory and in situ experiments showed that this marine-prospecting system could work correctly and robustly, which verifies the feasibility and validity of the method proposed in this paper. In addition to the marine seismic applications, this method can also be used in land seismic applications and certain other transmission applications such as environmental or engineering monitoring systems. (paper)

  12. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Full Text Available Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN) search for large-scale image retrieval in order to solve the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned from one deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks to index each image. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.
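
    As a sketch of the fusing-index idea (not the paper's implementation), the snippet below concatenates binary codes from two hypothetical subnetworks and ranks images by Hamming distance to a query.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images = 10000
codes_a = rng.integers(0, 2, size=(n_images, 48), dtype=np.uint8)  # subnetwork A codes (placeholder)
codes_b = rng.integers(0, 2, size=(n_images, 48), dtype=np.uint8)  # subnetwork B codes (placeholder)
index = np.concatenate([codes_a, codes_b], axis=1)                 # fused 96-bit codes

def search(query_code, index, k=10):
    """Return indices of the k images with the smallest Hamming distance to the query."""
    dist = np.count_nonzero(index != query_code, axis=1)
    return np.argsort(dist)[:k]

query = np.concatenate([codes_a[0], codes_b[0]])
print(search(query, index))
```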

  13. A high-speed transmission method for large-scale marine seismic prospecting systems

    Science.gov (United States)

    KeZhu, Song; Ping, Cao; JunFeng, Yang; FuMing, Ruan

    2012-12-01

    A marine seismic prospecting system is a kind of data acquisition and transmission system with large-scale coverage and synchronous multi-node acquisition. In this kind of system, data transmission is a fundamental and difficult technique. In this paper, a high-speed data-transmission method is proposed, its implications and limitations are discussed, and conclusions are drawn. The method we propose has obvious advantages over traditional techniques with respect to long-distance operation, high speed, and real-time transmission. A marine seismic system with four streamers, each 6000 m long and capable of supporting up to 1920 channels, was designed and built based on this method. The effective transmission baud rate of this system was found to reach up to 240 Mbps, while the minimum sampling interval time was as short as 0.25 ms. This system was found to achieve a good synchronization: 83 ns. Laboratory and in situ experiments showed that this marine-prospecting system could work correctly and robustly, which verifies the feasibility and validity of the method proposed in this paper. In addition to the marine seismic applications, this method can also be used in land seismic applications and certain other transmission applications such as environmental or engineering monitoring systems.

  14. Restoring Eelgrass (Zostera marina) from Seed: A Comparison of Planting Methods for Large-Scale Projects

    National Research Council Canada - National Science Library

    Orth, Robert; Marion, Scott; Granger, Steven; Traber, Michael

    2008-01-01

    Eelgrass (Zostera marina) seeds are being used in a variety of both small- and large-scale restoration activities and have been successfully used to initiate recovery of eelgrass in the Virginia seaside coastal lagoons...

  15. DGDFT: A massively parallel method for large scale density functional theory calculations.

    Science.gov (United States)

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  16. DGDFT: A massively parallel method for large scale density functional theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  17. DGDFT: A massively parallel method for large scale density functional theory calculations

    International Nuclear Information System (INIS)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-01-01

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  18. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
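
    One of the reviewed families, singular value decomposition-based (projection) reduction, can be sketched in a few lines; the snapshot matrix and energy threshold below are synthetic placeholders rather than a model from the review.

```python
import numpy as np

rng = np.random.default_rng(2)
n_species, n_snapshots = 200, 50
# Near low-rank "snapshot" data standing in for simulated concentration trajectories.
snapshots = rng.standard_normal((n_species, 5)) @ rng.standard_normal((5, n_snapshots))
snapshots += 1e-3 * rng.standard_normal((n_species, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1     # number of modes retaining 99.9% of the variance
basis = U[:, :r]                                # reduced basis

# Any full state x is approximated by r coordinates z = basis.T @ x.
x = snapshots[:, 0]
z = basis.T @ x
print("reduced dimension:", r, "reconstruction error:", np.linalg.norm(x - basis @ z))
```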

  19. An algebraic sub-structuring method for large-scale eigenvalue calculation

    International Nuclear Information System (INIS)

    Yang, C.; Gao, W.; Bai, Z.; Li, X.; Lee, L.; Husbands, P.; Ng, E.

    2004-01-01

    We examine sub-structuring methods for solving large-scale generalized eigenvalue problems from a purely algebraic point of view. We use the term 'algebraic sub-structuring' to refer to the process of applying matrix reordering and partitioning algorithms to divide a large sparse matrix into smaller submatrices from which a subset of spectral components are extracted and combined to provide approximate solutions to the original problem. We are interested in the question of which spectral components one should extract from each sub-structure in order to produce an approximate solution to the original problem with a desired level of accuracy. Error estimate for the approximation to the smallest eigenpair is developed. The estimate leads to a simple heuristic for choosing spectral components (modes) from each sub-structure. The effectiveness of such a heuristic is demonstrated with numerical examples. We show that algebraic sub-structuring can be effectively used to solve a generalized eigenvalue problem arising from the simulation of an accelerator structure. One interesting characteristic of this application is that the stiffness matrix produced by a hierarchical vector finite elements scheme contains a null space of large dimension. We present an efficient scheme to deflate this null space in the algebraic sub-structuring process

  20. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    Science.gov (United States)

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large scale simulation with application for engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of Fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation; iterative solution techniques; and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."

  1. Application of the PSA method to decay heat removal systems in a large scale FBR design

    International Nuclear Information System (INIS)

    Kotake, S.; Satoh, K.; Matsumoto, H.; Sugawara, M.; Sakata, K.; Okabe, A.

    1993-01-01

    The Probabilistic Safety Assessment (PSA) method is applied to a large scale loop-type FBR in its conceptual design stage in order to establish a well-balanced safety. Both the reactor shutdown and decay heat removal systems are designed to be highly reliable, e.g. 10^-7/d. In this paper the results of several reliability analyses concerning the DHRS are discussed, where the effects of the analytical assumptions, design options, and accident management measures on the reliability are examined. The failure probability is evaluated to be sufficiently small, since the DRACS consists of four independent loops with sufficient heat removal capacity and both forced and natural circulation capabilities are designed in. It is found that common mode failures of the active components in the DRACS dominate the reliability. Design diversity for these components can be effective for improvement, and accident management measures on the BOP are also possible by making use of the long grace period of the FBR. (author)

  2. Application of the PSA method to decay heat removal systems in a large scale FBR design

    Energy Technology Data Exchange (ETDEWEB)

    Kotake, S; Satoh, K [Japan Atomic Power Company, Otemachi, Chiyoda-ku, Tokyo (Japan); Matsumoto, H; Sugawara, M [Toshiba Corporation (Japan); Sakata, K [Mitsubishi Atomic Power Industries Inc. (Japan); Okabe, A [Hitachi Engineering Co., Ltd. (Japan)

    1993-02-01

    The Probabilistic Safety Assessment (PSA) method is applied to a large scale loop-type FBR in its conceptual design stage in order to establish a well-balanced safety. Both the reactor shutdown and decay heat removal systems are designed to be highly reliable, e.g. 10^-7/d. In this paper the results of several reliability analyses concerning the DHRS are discussed, where the effects of the analytical assumptions, design options, and accident management measures on the reliability are examined. The failure probability is evaluated to be sufficiently small, since the DRACS consists of four independent loops with sufficient heat removal capacity and both forced and natural circulation capabilities are designed in. It is found that common mode failures of the active components in the DRACS dominate the reliability. Design diversity for these components can be effective for improvement, and accident management measures on the BOP are also possible by making use of the long grace period of the FBR. (author)

  3. Topographic mapping on large-scale tidal flats with an iterative approach on the waterline method

    Science.gov (United States)

    Kang, Yanyan; Ding, Xianrong; Xu, Fan; Zhang, Changkuan; Ge, Xiaoping

    2017-05-01

    Tidal flats, which are both a natural ecosystem and a type of landscape, are of significant importance to ecosystem function and land resource potential. Morphologic monitoring of tidal flats has become increasingly important with respect to achieving sustainable development targets. Remote sensing is an established technique for the measurement of topography over tidal flats; of the available methods, the waterline method is particularly effective for constructing a digital elevation model (DEM) of intertidal areas. However, application of the waterline method is more limited in large-scale, shifting tidal flats areas, where the tides are not synchronized and the waterline is not a quasi-contour line. For this study, a topographical map of the intertidal regions within the Radial Sand Ridges (RSR) along the Jiangsu Coast, China, was generated using an iterative approach on the waterline method. A series of 21 multi-temporal satellite images (18 HJ-1A/B CCD and three Landsat TM/OLI) of the RSR area collected at different water levels within a five month period (31 December 2013-28 May 2014) was used to extract waterlines based on feature extraction techniques and artificial further modification. These 'remotely-sensed waterlines' were combined with the corresponding water levels from the 'model waterlines' simulated by a hydrodynamic model with an initial generalized DEM of exposed tidal flats. Based on the 21 heighted 'remotely-sensed waterlines', a DEM was constructed using the ANUDEM interpolation method. Using this new DEM as the input data, it was re-entered into the hydrodynamic model, and a new round of water level assignment of waterlines was performed. A third and final output DEM was generated covering an area of approximately 1900 km2 of tidal flats in the RSR. The water level simulation accuracy of the hydrodynamic model was within 0.15 m based on five real-time tide stations, and the height accuracy (root mean square error) of the final DEM was 0.182 m

  4. GRAPHICS-IMAGE MIXED METHOD FOR LARGE-SCALE BUILDINGS RENDERING

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Full Text Available Urban 3D model data is huge and unstructured, so LOD and out-of-core algorithms are usually used to reduce the amount of data drawn in each frame and improve rendering efficiency. When the scene is large enough, however, even complex optimization algorithms have difficulty achieving good results. Building on these traditional approaches, a novel idea was developed: we propose a graphics and image mixed method for large-scale building rendering. Firstly, the view field is divided into several regions; the graphics-image mixed method then renders the scene both to the screen and to an FBO, and the FBO is blended with the screen. The algorithm is tested on the huge CityGML model data of the urban areas of New York, which contains 188195 public building models, and compared with the Cesium platform. The experimental results show that the system runs smoothly, and confirm that the algorithm can roam more massive building scenes under the same hardware conditions and render the scene without visual loss.

  5. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory efficient method for computing diffraction patterns produced on large-scale focal planes by largescale Coded Apertures at wavelengths where diffraction effects are significant has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each Line-of-Sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square-root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next the set of reference patterns is transformed to generate pattern sets for other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique for each direction and wavelength. Finally the set of power patterns is summed to produce the full waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
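
    A heavily simplified sketch of the shift-and-phase accumulation described above is given below; the mask geometry, phase model, and array sizes are invented for illustration, and wrap-around shifts stand in for proper windowing of the oversized reference field.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256                                              # focal-plane array size (illustrative)
ref = np.exp(1j * rng.uniform(0, 2 * np.pi, (2 * n, 2 * n)))   # reference field from a central hole

holes = [(0, 0), (10, -5), (-20, 12)]                # hole offsets in pixels (hypothetical mask)
directions = [(0.0, 0.0), (0.01, 0.0)]               # scene lines of sight in radians (hypothetical)
wavelength = 4.0e-6                                  # metres
pitch = 20e-6                                        # mask/detector pitch in metres

image = np.zeros((n, n))
for ax, ay in directions:                            # incoherent sum over directions (and wavelengths)
    field = np.zeros((n, n), dtype=complex)
    for dx, dy in holes:                             # coherent sum over mask holes
        phase = 2 * np.pi / wavelength * pitch * (dx * ax + dy * ay)
        shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)   # translational pattern shift
        field += np.exp(1j * phase) * shifted[n // 2:n // 2 + n, n // 2:n // 2 + n]
    image += np.abs(field) ** 2                      # power pattern for this direction
print(image.shape, image.mean())
```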

  6. An eigenfunction method for reconstruction of large-scale and high-contrast objects.

    Science.gov (United States)

    Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P

    2007-07-01

    A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.

  7. Efficient numerical methods for the large-scale, parallel solution of elastoplastic contact problems

    KAUST Repository

    Frohne, Jörg

    2015-08-06

    © 2016 John Wiley & Sons, Ltd. Quasi-static elastoplastic contact problems are ubiquitous in many industrial processes and other contexts, and their numerical simulation is consequently of great interest in accurately describing and optimizing production processes. The key component in these simulations is the solution of a single load step of a time iteration. From a mathematical perspective, the problems to be solved in each time step are characterized by the difficulties of variational inequalities for both the plastic behavior and the contact problem. Computationally, they also often lead to very large problems. In this paper, we present and evaluate a complete set of methods that are (1) designed to work well together and (2) allow for the efficient solution of such problems. In particular, we use adaptive finite element meshes with linear and quadratic elements, a Newton linearization of the plasticity, active set methods for the contact problem, and multigrid-preconditioned linear solvers. Through a sequence of numerical experiments, we show the performance of these methods. This includes highly accurate solutions of a three-dimensional benchmark problem and scaling our methods in parallel to 1024 cores and more than a billion unknowns.

  8. Efficient numerical methods for the large-scale, parallel solution of elastoplastic contact problems

    KAUST Repository

    Frohne, Jörg; Heister, Timo; Bangerth, Wolfgang

    2015-01-01

    © 2016 John Wiley & Sons, Ltd. Quasi-static elastoplastic contact problems are ubiquitous in many industrial processes and other contexts, and their numerical simulation is consequently of great interest in accurately describing and optimizing production processes. The key component in these simulations is the solution of a single load step of a time iteration. From a mathematical perspective, the problems to be solved in each time step are characterized by the difficulties of variational inequalities for both the plastic behavior and the contact problem. Computationally, they also often lead to very large problems. In this paper, we present and evaluate a complete set of methods that are (1) designed to work well together and (2) allow for the efficient solution of such problems. In particular, we use adaptive finite element meshes with linear and quadratic elements, a Newton linearization of the plasticity, active set methods for the contact problem, and multigrid-preconditioned linear solvers. Through a sequence of numerical experiments, we show the performance of these methods. This includes highly accurate solutions of a three-dimensional benchmark problem and scaling our methods in parallel to 1024 cores and more than a billion unknowns.

  9. Large-Scale Portfolio Optimization Using Multiobjective Evolutionary Algorithms and Preselection Methods

    Directory of Open Access Journals (Sweden)

    B. Y. Qu

    2017-01-01

    Full Text Available Portfolio optimization problems involve the selection of different assets to invest in so as to maximize the overall return and minimize the overall risk simultaneously. The complexity of the optimal asset allocation problem increases with the number of assets available to select from for investing. The optimization problem becomes computationally challenging when there are more than a few hundred assets to select from. To reduce the complexity of large-scale portfolio optimization, two asset preselection procedures, which consider the return and risk of individual assets and pairwise correlations to remove assets that may not potentially be selected into any portfolio, are proposed in this paper. With these asset preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings are carried out. The experimental results show that with the proposed methods the simulation time is reduced while return-risk trade-off performance is significantly improved. Meanwhile, the NMOEA/D is able to outperform the other compared algorithms on all experiments according to the comparative analysis.
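
    The preselection step can be sketched as follows; the return data, the 500-asset cut, and the 0.95 correlation threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_assets, n_days = 2000, 750
returns = rng.normal(5e-4, 0.01, size=(n_days, n_assets))   # toy daily returns

mean_r = returns.mean(axis=0)
risk = returns.std(axis=0)
score = mean_r / risk                                        # simple individual return/risk ratio

keep = np.argsort(score)[-500:]                              # keep the 500 best-scoring assets
corr = np.corrcoef(returns[:, keep], rowvar=False)

# Greedy pruning of highly correlated pairs, keeping the higher-scored asset of each pair.
selected = []
for i in np.argsort(score[keep])[::-1]:
    if all(abs(corr[i, j]) < 0.95 for j in selected):
        selected.append(i)
preselected_assets = keep[selected]
print("assets passed to the portfolio optimizer:", preselected_assets.size)
```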

  10. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class classification with a Support Vector Machine (SVM), based on a binary tree, is presented to address the low learning efficiency of the SVM when processing large-scale multi-class samples. This paper adopts a bottom-up method to set up the binary tree hierarchy; according to the resulting hierarchy, a sub-classifier learns from the corresponding samples of each node. During learning, several class clusters are generated after the first clustering of the training samples. Firstly, central points are extracted from those class clusters which contain only one type of sample. For those which contain two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, and a secondary clustering is undertaken, after which central points are extracted from the resulting sub-class clusters. By learning from the reduced samples formed by the integration of the extracted central points, the sub-classifiers are obtained. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee higher classification accuracy, greatly reduce the number of samples, and effectively improve learning efficiency.
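
    A rough stand-in for the sample-reduction idea (clustering each class and training on cluster centres) is sketched below; it omits the paper's bottom-up binary-tree construction and secondary clustering, and all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(5)
# Toy two-class data set with 10,000 samples.
X = np.vstack([rng.normal(0, 1, (5000, 10)), rng.normal(3, 1, (5000, 10))])
y = np.repeat([0, 1], 5000)

centres, labels = [], []
for cls in np.unique(y):
    km = KMeans(n_clusters=50, n_init=5, random_state=0).fit(X[y == cls])
    centres.append(km.cluster_centers_)          # reduced sample set for this class
    labels.append(np.full(50, cls))

X_reduced = np.vstack(centres)
y_reduced = np.concatenate(labels)
clf = SVC(kernel="rbf").fit(X_reduced, y_reduced)  # far fewer points than the full training set
print("accuracy on full data:", clf.score(X, y))
```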

  11. The Roles of Sparse Direct Methods in Large-scale Simulations

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-01-01

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence of iterative solvers. We have deployed our direct-method techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  12. The Roles of Sparse Direct Methods in Large-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-06-27

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence of iterative solvers. We have deployed our direct-method techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.
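
    For readers who want to experiment with a sparse direct solver, a minimal example using SciPy's SuperLU interface is shown below; it is unrelated to the SciDAC codes themselves, and the test matrix is a simple 1-D discretised operator.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sparse system from a 1-D second-difference operator.
n = 100000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.random.rand(n)

lu = spla.splu(A)            # sparse LU factorisation (direct method)
x = lu.solve(b)              # triangular solves; the factorisation is reusable for many right-hand sides
print("residual norm:", np.linalg.norm(A @ x - b))
```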

  13. Google Street View as an alternative method to car surveys in large-scale vegetation assessments.

    Science.gov (United States)

    Deus, Ernesto; Silva, Joaquim S; Catry, Filipe X; Rocha, Miguel; Moreira, Francisco

    2015-10-01

    Car surveys (CS) are a common method for assessing the distribution of alien invasive plants. Google Street View (GSV), a free-access web technology where users may experience a virtual travel along roads, has been suggested as a cost-effective alternative to car surveys. We tested if we could replicate the results from a countrywide survey conducted by car in Portugal using GSV as a remote sensing tool, aiming at assessing the distribution of Eucalyptus globulus Labill. wildlings on roadsides adjacent to eucalypt stands. Georeferenced points gathered along CS were used to create road transects visible as lines overlapping the road in GSV environment, allowing surveying the same sampling areas using both methods. This paper presents the results of the comparison between the two methods. Both methods produced similar models of plant abundance, selecting the same explanatory variables, in the same hierarchical order of importance and depicting a similar influence on plant abundance. Even though the GSV model had a lower performance and the GSV survey detected fewer plants, additional variables collected exclusively with GSV improved model performance and provided a new insight into additional factors influencing plant abundance. The survey using GSV required ca. 9 % of the funds and 62 % of the time needed to accomplish the CS. We conclude that GSV may be a cost-effective alternative to CS. We discuss some advantages and limitations of GSV as a survey method. We forecast that GSV may become a widespread tool in road ecology, particularly in large-scale vegetation assessments.

  14. A new method to determine large scale structure from the luminosity distance

    International Nuclear Information System (INIS)

    Romano, Antonio Enea; Chiang, Hsu-Wen; Chen, Pisin

    2014-01-01

    The luminosity distance can be used to determine the properties of large scale structure around the observer. To this purpose we develop a new inversion method to map the luminosity distance to a Lemaitre–Tolman–Bondi (LTB) metric, based on the use of the exact analytical solution of the Einstein equations. The main advantages of this approach are an improved numerical accuracy and stability, an exact analytical setting of the initial conditions for the differential equations which need to be solved, and validity for any sign of the functions determining the LTB geometry. Given the fully analytical form of the differential equations, this method also simplifies the calculation of the redshift expansion around the apparent horizon point, where the numerical solution becomes unstable. We test the method by inverting the supernovae Ia luminosity distance function corresponding to the best fit ΛCDM model. We find that only a limited range of initial conditions is compatible with observations, or a transition from redshift to blueshift can occur at relatively low redshift. Although LTB solutions without a cosmological constant have been shown not to be compatible with all the different sets of available observational data, those studies normally fit the data assuming a special functional ansatz for the inhomogeneity profile, which often depends only on a few parameters. Inversion methods, on the contrary, are able to fully explore the freedom in fixing the functions which determine an LTB solution. Another important possible application is not about LTB solutions as cosmological models, but rather as tools to study the effects on observations made by a generic observer located in an inhomogeneous region of the Universe, where a fully non-perturbative treatment involving exact solutions of the Einstein equations is required. (paper)

  15. Large scale inverse problems computational methods and applications in the earth sciences

    CERN Document Server

    Scheichl, Robert; Freitag, Melina A; Kindermann, Stefan

    2013-01-01

    This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" that took place in Linz, Austria, October 3-7, 2011. The volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications.

  16. Improvement of methods for large scale sequencing; application to human Xq28

    Energy Technology Data Exchange (ETDEWEB)

    Gibbs, R.A.; Andersson, B.; Wentland, M.A. [Baylor College of Medicine, Houston, TX (United States)] [and others

    1994-09-01

    Sequencing of a one-megabase region of Xq28, spanning the FRAXA and IDS loci, has been undertaken in order to investigate the practicality of the shotgun approach for large-scale sequencing and as a platform to develop improved methods. The efficiency of several steps in the shotgun sequencing strategy has been increased using PCR-based approaches. An improved method for preparation of M13 libraries has been developed. This protocol combines a previously described adaptor-based protocol with the uracil DNA glycosylase (UDG) cloning procedure. The efficiency of this procedure has been found to be up to 100-fold higher than that of previously used protocols. In addition, the novel protocol is more reliable and thus easy to establish in a laboratory. The method has also been adapted for the simultaneous shotgun sequencing of multiple short fragments by concentrating them before library construction. This protocol is suitable for rapid characterization of cDNA clones. A library was constructed from 15 PCR-amplified and concentrated human cDNA inserts; the insert sequences could easily be identified as separate contigs during the assembly process, and the sequence coverage was even along each fragment. Using this strategy, the fine structures of the FRAXA and IDS loci have been revealed and several EST homologies indicating novel expressed sequences have been identified. The use of PCR to close repetitive regions that are difficult to clone was tested by determining the sequence of a cosmid mapping to DXS455 in Xq28, which contains a polymorphic VNTR. The region containing the VNTR was not represented in the shotgun library, but by designing PCR primers in the sequences flanking the gap and by cloning and sequencing the PCR product, the fine structure of the VNTR has been determined. It was found to be an AT-rich VNTR with a repeated 25-mer at the center.

  17. III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.

    Science.gov (United States)

    Davis-Kean, Pamela E; Jager, Justin

    2017-06-01

    For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited in both the power to detect differences and the demographic diversity needed to generalize clearly and broadly. Thus, in this chapter we discuss the value of using existing large-scale data sets to test the complex questions of child development and how to develop future large-scale data sets that are both representative and able to answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.

  18. A Novel Yeast Surface Display Method for Large-Scale Screen Inhibitors of Sortase A.

    Science.gov (United States)

    Wu, Lin; Li, Huijun; Tang, Tianle

    2017-01-24

    high-throughput analysis, but the conventional method is much more sensitive. The method described in this paper is expected to lead to large-scale screening of sortase A inhibitors which can be used to decrease the risk of drug resistance development.

  19. A Novel Yeast Surface Display Method for Large-Scale Screen Inhibitors of Sortase A

    Directory of Open Access Journals (Sweden)

    Lin Wu

    2017-01-01

    suitable for high-throughput analysis, but the conventional method is much more sensitive. The method described in this paper is expected to lead to large-scale screening of sortase A inhibitors which can be used to decrease the risk of drug resistance development.

  20. An optimal beam alignment method for large-scale distributed space surveillance radar system

    Science.gov (United States)

    Huang, Jian; Wang, Dongya; Xia, Shuangzhi

    2018-06-01

    Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the Transmitting/Receiving (T/R) beams over a large volume of space using narrow beams poses a special and considerable technical challenge in the space surveillance area. Based on the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using the direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location, and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.

  1. Development of a vacuum leak test method for large-scale superconducting magnet test facilities

    International Nuclear Information System (INIS)

    Kawano, Katsumi; Hamada, Kazuya; Okuno, Kiyoshi; Kato, Takashi

    2006-01-01

    Japan Atomic Energy Agency (JAEA) has developed leak detection technology for liquid helium temperature experiments in large-scale superconducting magnet test facilities. In JAEA, a cryosorption pump that uses an absorbent cooled by liquid nitrogen, together with a conventional helium leak detector, is used to detect helium gas leaking from pressurized welded joints of pipes and valves in a vacuum chamber. The cryosorption pump plays the role of decreasing air components, such as water, nitrogen and oxygen, to increase the sensitivity of helium leak detection. The established detection sensitivity for helium leak testing is 10^-10 to 10^-9 Pa·m³/s. A total of 850 welded and mechanical joints inside the cryogenic test facility for the ITER Central Solenoid Model Coil (CSMC) experiments have been tested. In the test facility, 73 units of glass fiber-reinforced plastic (GFRP) insulation break are used. The amount of helium permeation through the GFRP was recorded during helium leak testing. To distinguish helium leaks from insulation-break permeation, the helium permeation characteristic of the GFRP part was measured as a function of the time of helium charging. Helium permeation was observed at 6 h after helium charging, and the detected permeation is around 10^-7 Pa·m³/s. Using the helium leak test method developed, CSMC experiments have been successfully completed. (author)

  2. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...

  3. Non-destructive measurement methods for large scale gaseous diffusion process equipment

    International Nuclear Information System (INIS)

    Mayer, R.L.; Hagenauer, R.C.; McGinnis, B.R.

    1994-01-01

    Two measurement methods have been developed to measure uranium hold-up non-destructively in gaseous diffusion plants. These methods include passive neutron and passive γ-ray measurements. An additional method, high-resolution γ-ray spectroscopy, provides supplementary information about additional γ-ray-emitting isotopes, γ-ray correction factors, ²³⁵U/²³⁴U ratios, and ²³⁵U enrichment. Many of these methods can be used as general-purpose measurement techniques for large containers of uranium. Measurement applications for these methods include uranium hold-up, waste measurements, criticality safety, and nuclear accountability.

  4. A method for developing a large-scale sediment yield index for European river basins

    Energy Technology Data Exchange (ETDEWEB)

    Delmas, Magalie; Cerdan, Olivier; Garcin, Manuel [BRGM ARN/ESL, Orleans (France); Mouchel, Jean-Marie [UMR Sisyphe, Univ. P and M Curie, Paris (France)

    2009-12-15

    Background, aim, and scope: Sediment fluxes within continental areas play a major role in biogeochemical cycles and are often the cause of soil surface degradation as well as water and ecosystem pollution. In a situation where a high proportion of the land surface is experiencing significant global land use and climate changes, it appears important to establish sediment budgets considering the major processes forcing sediment redistribution within drainage areas. In this context, the aim of this study is to test a methodology to estimate a sediment yield index at a large spatial resolution for European river basins. Data and methods: Four indicators representing processes respectively considered as sources (mass movement and hillslope erosion), sinks (deposits), and transfers of sediments (drainage density) are defined using distributed data. Using these indicators we propose a basic conceptual approach to test the possibility of explaining sediment yield observed at the outlet of 29 selected European river basins. We propose an index which adds the two sources and transfers, and subsequently subtracts the sink term. This index is then compared to observed sediment yield data. Results: With this approach, variability between river basins is observed and the evolution of each indicator analyzed. A linear regression shows a correlation coefficient of 0.83 linking observed specific sediment yield (SSY) with the SSY index. Discussion: To improve this approach at this large river basin scale, basin classification is further refined using the relation between the observed SSY and the index obtained from the four indicators. It allows a refinement of the results. Conclusions: This study presents a conceptual approach offering the advantages of using spatially distributed data combined with major sediment redistribution processes to estimate the sediment yield observed at the outlet of river basins. Recommendations and perspectives: Inclusion of better information on

  5. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from testing the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  6. Methods for the reconstruction of large scale anisotropies of the cosmic ray flux

    Energy Technology Data Exchange (ETDEWEB)

    Over, Sven

    2010-01-15

    In cosmic ray experiments the arrival directions, among other properties, of cosmic ray particles from detected air shower events are reconstructed. The question of uniformity in the distribution of arrival directions is of great importance for models that try to explain cosmic radiation. In this thesis, methods for the reconstruction of the parameters of a dipole-like flux distribution of cosmic rays from a set of recorded air shower events are studied. Different methods are presented and examined by means of detailed Monte Carlo simulations. Particular focus is put on the implications of spurious experimental effects. Modifications of existing methods and new methods are proposed. The main goal of this thesis is the development of the horizontal Rayleigh analysis method. Unlike other methods, this method is based on the analysis of local viewing directions instead of global sidereal directions. As a result, the symmetries of the experimental setup can be better utilised. The calculation of the sky coverage (exposure function) is not necessary in this analysis. The performance of the method is tested by means of further Monte Carlo simulations. The new method performs as well as, or only marginally worse than, established methods under ideal measurement conditions. However, the simulation of certain experimental effects can cause substantial misestimation of the dipole parameters by the established methods, whereas the new method produces no systematic deviations. The invulnerability to certain effects offers additional advantages, as certain data selection cuts become dispensable. (orig.)
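
    For orientation, a conventional first-harmonic (Rayleigh) analysis in right ascension can be written in a few lines; the toy sample and dipole amplitude below are invented, and the thesis's horizontal Rayleigh method differs by working in local viewing directions.

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy right-ascension sample (radians) with a small dipole-like modulation injected.
n = 100000
alpha = rng.uniform(0, 2 * np.pi, n)
accept = rng.uniform(0, 1, n) < 0.5 * (1 + 0.02 * np.cos(alpha - 1.0))
alpha = alpha[accept]

# First-harmonic Rayleigh coefficients, amplitude, and phase.
a = 2.0 / alpha.size * np.sum(np.cos(alpha))
b = 2.0 / alpha.size * np.sum(np.sin(alpha))
amplitude = np.hypot(a, b)
phase = np.arctan2(b, a)
print(f"amplitude = {amplitude:.4f}, phase = {np.degrees(phase):.1f} deg")
```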

  7. Large-scale photochemical reactions of nanocrystalline suspensions: a promising green chemistry method.

    Science.gov (United States)

    Veerman, Marcel; Resendiz, Marino J E; Garcia-Garibay, Miguel A

    2006-06-08

    Photochemical reactions in the solid state can be scaled up from a few milligrams to 10 grams by using colloidal suspensions of a photoactive molecular crystal prepared by the solvent shift method. Pure products are recovered by filtration, and the use of H2O as a suspension medium makes this method a very attractive one from a green chemistry perspective. Using the photodecarbonylation of dicumyl ketone (DCK) as a test system, we show that reaction efficiencies in colloidal suspensions rival those observed in solution.

  8. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

    Highlights: • Two dimensionless uniformity factors are presented to heat exchange network. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors to describe the heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping are deduced. Additionally, a novel algorithm that combines deterministic and stochastic optimizations to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, and is named the Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely, the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.

  9. A primal-dual interior point method for large-scale free material optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias

    2015-01-01

    Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting...... optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large...... of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set...

  10. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Currently, with the rapid increase of data scales in network traffic classification, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, the execution time was still unsatisfactory owing to the numerous iterative computations during the processing. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed to build candidate feature subsets. The optimal feature subset is then selected using the continuous iterations of the Spark computing framework. The implementation demonstrates that, on the precondition of keeping the classification accuracy, our method reduces the time cost of modeling and classification, and improves the execution efficiency of feature selection significantly.
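
    As a rough, single-machine illustration of the two ingredients named above (the Spark parallelisation and the exact stopping rule of the paper are omitted, and all names are assumptions), Fisher-score ranking followed by a greedy forward pass could look like this:

        import numpy as np

        def fisher_scores(X, y):
            """Fisher score per feature: between-class scatter divided by within-class scatter."""
            mu = X.mean(axis=0)
            num = np.zeros(X.shape[1])
            den = np.zeros(X.shape[1])
            for c in np.unique(y):
                Xc = X[y == c]
                num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
                den += len(Xc) * Xc.var(axis=0)
            return num / np.maximum(den, 1e-12)

        def forward_select(X, y, score_subset, k=10):
            """Greedy pass over features in Fisher-score order; keep a feature only if it helps."""
            ranked = np.argsort(fisher_scores(X, y))[::-1]
            chosen, best = [], -np.inf
            for f in ranked:
                s = score_subset(X[:, chosen + [f]], y)     # e.g. cross-validated accuracy
                if s > best:
                    chosen, best = chosen + [f], s
                if len(chosen) == k:
                    break
            return chosen, best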

  11. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents as a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
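
    A minimal numerical sketch of the sampling idea (bin the candidate record pairs by comparison score, clerically review a sample from each bin, and extrapolate the counts above and below the acceptance cut-off) is given below; the bin counts, sample sizes and the handling of a bin that straddles the cut-off are simplifying assumptions.

        import numpy as np

        def sampled_precision_recall(scores, review, cutoff, n_bins=10, per_bin=100, seed=0):
            """Estimate precision and recall from clerical review of a sample of pairs per score bin.

            scores : comparison score of every candidate record pair
            review : callable(indices) -> boolean array of clerical true-match decisions
            """
            rng = np.random.default_rng(seed)
            edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
            tp = fp = fn = 0.0
            for i in range(n_bins):
                upper = scores <= edges[i + 1] if i == n_bins - 1 else scores < edges[i + 1]
                in_bin = np.where((scores >= edges[i]) & upper)[0]
                if len(in_bin) == 0:
                    continue
                sample = rng.choice(in_bin, size=min(per_bin, len(in_bin)), replace=False)
                match_rate = review(sample).mean()            # fraction of true matches in the sample
                n_match = match_rate * len(in_bin)            # extrapolated to the whole bin
                if edges[i] >= cutoff:                        # bin sits above the acceptance threshold
                    tp += n_match
                    fp += len(in_bin) - n_match
                else:                                         # links below the cut-off are never accepted
                    fn += n_match
            return tp / max(tp + fp, 1e-12), tp / max(tp + fn, 1e-12)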

  12. Semi-Automated Air-Coupled Impact-Echo Method for Large-Scale Parkade Structure

    Directory of Open Access Journals (Sweden)

    Tyler Epp

    2018-03-01

    Structural Health Monitoring (SHM) has moved to data-dense systems, utilizing numerous sensor types to monitor infrastructure, such as bridges and dams, more regularly. One of the issues faced in this endeavour is the scale of the inspected structures and the time it takes to carry out testing. Installing automated systems that can provide measurements in a timely manner is one way of overcoming these obstacles. This study proposes an Artificial Neural Network (ANN) application that determines intact and damaged locations from a small training sample of impact-echo data, using air-coupled microphones from a reinforced concrete beam in lab conditions and data collected from a field experiment in a parking garage. The impact-echo testing in the field is carried out in a semi-autonomous manner to expedite the front end of the in situ damage detection testing. The use of an ANN removes the need for a user-defined cutoff value for the classification of intact and damaged locations when a least-square distance approach is used. It is postulated that this may contribute significantly to testing time reduction when monitoring large-scale civil Reinforced Concrete (RC) structures.

  13. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
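
    MOCC itself is not reproduced here; the sketch below only illustrates the underlying principle of correlation-based interest point matching, scoring candidate pairs by normalised cross-correlation of local patches (the patch radius, threshold and border handling are assumptions, and no rotation or scale invariance is included).

        import numpy as np

        def ncc(a, b):
            """Normalised cross-correlation of two equally sized patches."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        def match_points(img1, pts1, img2, pts2, radius=8, min_score=0.8):
            """Greedy best-score matching of interest points; points are assumed to lie
            at least `radius` pixels away from the image borders."""
            def patch(img, p):
                y, x = p
                return img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            matches = []
            for i, p in enumerate(pts1):
                scores = [ncc(patch(img1, p), patch(img2, q)) for q in pts2]
                j = int(np.argmax(scores))
                if scores[j] >= min_score:
                    matches.append((i, j, scores[j]))
            return matches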

  14. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  15. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial......The optimization of logistics in large building complexes with many resources, such as hospitals, require realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide....... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on Wi...

  16. Constructing sites at a large scale - towards new design (education) methods

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2010-01-01

    of the design disciplines within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective the formulation of the design problem or brief becomes an integrated part of the design process......Since the 1990s the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. The current paradigm of planning by projects reinforces the role....... This paper discusses new design (education) methods based on a relational concept of urban sites and design processes using the actor-network-theory as theoretical frame....

  17. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors......). Simulation programs are proposed as a control supporting tool for daily operation and performance prediction of central solar heating plants. Finally the CSHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given....

  18. Complementary methods to study glasses and melts at large scale facilities

    International Nuclear Information System (INIS)

    Leydier, M.

    2010-01-01

    In this work, large scale facilities (neutron and synchrotron sources) were used for studying the structure and dynamics of disordered materials (liquids and glasses). In particular, three studies are presented. The first is a structural study of Ln_2O_3-Al_2O_3-SiO_2 glasses where Ln represents the cations Sc, Y and La. We combined the results obtained from x-ray and neutron diffraction and x-ray absorption experiments. This work is focused on the determination of the interatomic distances and coordination numbers for the three pairs Si-O, Al-O and Ln-O. The second is a study of the iron oxide FeO in the liquid state. Photoemission experiments at the iron absorption edge were associated with x-ray and neutron diffraction measurements. The results obtained made it possible to define a consistent structural model for liquid FeO. The third is a study of the dynamics in CaAl_2O_4 melts. From inelastic x-ray scattering experiments, it was possible to determine the apparent and isothermal sound velocities as well as the longitudinal viscosity. These measurements were complemented by quasielastic neutron scattering experiments from which atomic diffusion coefficients were determined. This work shows the interest of combining various experimental techniques for studying glasses and melts and points out the need to also associate modelling techniques such as molecular dynamics simulations. (author)

  19. Efficient seed-mediated method for the large-scale synthesis of Au nanorods

    International Nuclear Information System (INIS)

    Ahmed, Waqqar; Bhatti, Arshad Saleem; Ruitenbeek, Jan M. van

    2017-01-01

    Seed-mediated methods are widely followed for the synthesis of Au nanorods (NRs). However, mostly dilute concentrations of the Au precursor (HAuCl_4) are used in the growth solution, which leads to a low final concentration of NRs. Attempts at increasing the concentration of NRs by simply increasing the concentration of HAuCl_4, other reagents in the growth solution and seeds lead to faster growth kinetics, which is not favourable for NR growth. Herein, we demonstrate that the increase in growth kinetics for high concentrations of reagents in the growth solution can be neutralised by decreasing the pH of the solution. The synthesis of the NRs can be scaled up by using higher concentrations of reagents and adding an optimum concentration of HCl to the growth solution. The concentration of HAuCl_4 in the growth solution can be increased up to 5 mM, and 10–20 times more NRs can be synthesised for the same reaction volume compared to that of the conventional seed-mediated method. We have also noticed that a cetyltrimethylammonium bromide (CTAB)-to-HAuCl_4 molar ratio of 50 is sufficient for obtaining a high yield of NRs.

  20. Efficient seed-mediated method for the large-scale synthesis of Au nanorods

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Waqqar; Bhatti, Arshad Saleem [COMSATS Institute of Information Technology, Department of Physics (Pakistan); Ruitenbeek, Jan M. van, E-mail: Ruitenbeek@physics.leidenuniv.nl [Leiden University, Huygens-Kamerlingh Onnes Laboratory (Netherlands)

    2017-03-15

    Seed-mediated methods are widely followed for the synthesis of Au nanorods (NRs). However, mostly dilute concentrations of the Au precursor (HAuCl{sub 4}) are used in the growth solution, which leads to a low final concentration of NRs. Attempts at increasing the concentration of NRs by simply increasing the concentration of HAuCl{sub 4}, other reagents in the growth solution and seeds lead to faster growth kinetics, which is not favourable for NR growth. Herein, we demonstrate that the increase in growth kinetics for high concentrations of reagents in the growth solution can be neutralised by decreasing the pH of the solution. The synthesis of the NRs can be scaled up by using higher concentrations of reagents and adding an optimum concentration of HCl to the growth solution. The concentration of HAuCl{sub 4} in the growth solution can be increased up to 5 mM, and 10–20 times more NRs can be synthesised for the same reaction volume compared to that of the conventional seed-mediated method. We have also noticed that a cetyltrimethylammonium bromide (CTAB)-to-HAuCl{sub 4} molar ratio of 50 is sufficient for obtaining a high yield of NRs.

  1. Large Scale Leach Test Facility: Development of equipment and methods, and comparison to MCC-1 leach tests

    International Nuclear Information System (INIS)

    Pellarin, D.J.; Bickford, D.F.

    1985-01-01

    This report describes the test equipment and methods, and documents the results of the first large-scale MCC-1 experiments in the Large Scale Leach Test Facility (LSLTF). Two experiments were performed using 1-ft-long samples sectioned from the middle of canister MS-11. The leachant used in the experiments was ultrapure deionized water - an aggressive and well characterized leachant providing high sensitivity for liquid sample analyses. All the original test plan objectives have been successfully met. Equipment and procedures have been developed for large-sample-size leach testing. The statistical reliability of the method has been determined, and ''bench mark'' data developed to relate small scale leach testing to full size waste forms. The facility is unique, and provides sampling reliability and flexibility not possible in smaller laboratory scale tests. Future use of this facility should simplify and accelerate the development of leaching models and repository specific data. The factor of less than 3 for leachability, corresponding to a 200,000/1 increase in sample volume, enhances the credibility of small scale test data which precedes this work, and supports the ability of the DWPF waste form to meet repository criteria

  2. An intercomparison of three methods for the large-scale isolation of oceanic dissolved organic matter

    Science.gov (United States)

    Green, Nelson W.; Perdue, E. Michael; Aiken, George R.; Butler, Kenna D.; Chen, Hongmei; Dittmar, Thorsten; Niggemann, Jutta; Stubbins, Aron

    2014-01-01

    Dissolved organic matter (DOM) was isolated from large volumes of deep (674 m) and surface (21 m) ocean water via reverse osmosis/electrodialysis (RO/ED) and two solid-phase extraction (SPE) methods (XAD-8/4 and PPL) at the Natural Energy Laboratory of Hawaii Authority (NELHA). By applying the three methods to common water samples, the efficiencies of XAD, PPL and RO/ED DOM isolation were compared. XAD recovered 42% of dissolved organic carbon (DOC) from deep water (25% with XAD-8; 17% with XAD-4) and 30% from surface water (16% with XAD-8; 14% with XAD-4). PPL recovered 61 ± 3% of DOC from deep water and 61% from surface water. RO/ED recovered 82 ± 3% of DOC from deep water, 14 ± 3% of which was recovered in a sodium hydroxide rinse, and 75 ± 5% of DOC from surface water, with 12 ± 2% in the sodium hydroxide rinse. The highest recoveries of all were achieved by the sequential isolation of DOC, first with PPL and then via RO/ED. This combined technique recovered 98% of DOC from a deep water sample and 101% of DOC from a surface water sample. In total, 1.9, 10.3 and 1.6 g-C of DOC were collected via XAD, PPL and RO/ED, respectively. Rates of DOC recovery using the XAD, PPL and RO/ED methods were 10, 33 and 10 mg-C h⁻¹, respectively. Based upon C/N ratios, XAD isolates were heavily C-enriched compared with water column DOM, whereas RO/ED and PPL ➔ RO/ED isolate C/N values were most representative of the original DOM. All techniques are suitable for the isolation of large amounts of DOM with purities suitable for most advanced analytical techniques. Coupling PPL and RO/ED techniques may provide substantial progress in the search for a method to quantitatively isolate oceanic DOC, bringing the entirety of the DOM pool within the marine chemist's analytical window.

  3. An accurate and efficient method for large-scale SSR genotyping and applications.

    Science.gov (United States)

    Li, Lun; Fang, Zhiwei; Zhou, Junfei; Chen, Hong; Hu, Zhangfeng; Gao, Lifen; Chen, Lihong; Ren, Sheng; Ma, Hongyu; Lu, Long; Zhang, Weixiong; Peng, Hai

    2017-06-02

    Accurate and efficient genotyping of simple sequence repeats (SSRs) constitutes the basis of SSRs as an effective genetic marker with various applications. However, the existing methods for SSR genotyping suffer from low sensitivity, low accuracy, low efficiency and high cost. In order to fully exploit the potential of SSRs as a genetic marker, we developed a novel method for SSR genotyping, named AmpSeq-SSR, which combines multiplexing polymerase chain reaction (PCR), targeted deep sequencing and comprehensive analysis. AmpSeq-SSR is able to genotype potentially more than a million SSRs at once using the current sequencing techniques. In the current study, we simultaneously genotyped 3105 SSRs in eight rice varieties, which were further validated experimentally. The results showed that the accuracies of AmpSeq-SSR were nearly 100% and 94% with a single-base resolution for homozygous and heterozygous samples, respectively. To demonstrate the power of AmpSeq-SSR, we adopted it in two applications. The first was to construct discriminative fingerprints of the rice varieties using 3105 SSRs, which offer much greater discriminative power than the 48 SSRs commonly used for rice. The second was to map Xa21, a gene that confers persistent resistance to rice bacterial blight. We demonstrated that genome-scale fingerprints of an organism can be efficiently constructed and candidate genes, such as Xa21 in rice, can be accurately and efficiently mapped using an innovative strategy consisting of multiplexing PCR, targeted sequencing and computational analysis. While the work we present focused on rice, AmpSeq-SSR can be readily extended to animals and micro-organisms. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Synthesis of Large-Scale Single-Crystalline Monolayer WS2 Using a Semi-Sealed Method

    Directory of Open Access Journals (Sweden)

    Feifei Lan

    2018-02-01

    As a two-dimensional semiconductor, WS2 has attracted great attention due to its rich physical properties and potential applications. However, it is still difficult to synthesize monolayer single-crystalline WS2 at a larger scale. Here, we report the growth of large-scale triangular single-crystalline WS2 with a semi-sealed installation by chemical vapor deposition (CVD). Through this method, triangular single-crystalline WS2 with an average length of more than 300 µm was obtained. The largest one was about 405 μm in length. WS2 triangles with different sizes and thicknesses were analyzed by optical microscope and atomic force microscope (AFM). Their optical properties were evaluated by Raman and photoluminescence (PL) spectra. This report paves the way to fabricating large-scale single-crystalline monolayer WS2, which is useful for the growth of high-quality WS2 and its potential applications in the future.

  5. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. The method of measurement and synchronization control for large-scale complex loading system

    International Nuclear Information System (INIS)

    Liao Min; Li Pengyuan; Hou Binglin; Chi Chengfang; Zhang Bo

    2012-01-01

    With the development of modern industrial technology, measurement and control systems have been widely used in high precision, complex industrial control equipment and large-tonnage loading devices. The measurement and control system is often used to analyze the distribution of stress and displacement in the complex bearing load or the complex nature of the mechanical structure itself. In the ITER GS mock-up with 5 flexible plates, for each load combination it is necessary to detect and measure potential slippage between the central flexible plate and the neighboring spacers, as well as the potential slippage between each pre-stressing bar and its neighboring plate. The measurement and control system consists of seven sets of EDC controllers and boards, a computer system, a 16-channel quasi-dynamic strain gauge, 25 sets of displacement sensors, and 7 sets of load and displacement sensors in the cylinders. This paper demonstrates the principles and methods used by the EDC220 digital controller to achieve synchronization control, and the R and D process of the multi-channel loading control software and measurement software. (authors)

  7. Hybrid precoding based on matrix-adaptive method for multiuser large-scale antenna arrays.

    Directory of Open Access Journals (Sweden)

    Yongpan Feng

    Massive multiple-input multiple-output (MIMO) is envisioned to offer a considerable improvement in capacity, but it has a high cost and the radio frequency (RF) chain components have a high power consumption at high frequency. To address this problem, a hybrid analog and digital precoding scheme has been studied recently, which restricts the number of RF chains to far less than the number of antenna elements. In this paper, we consider the downlink communication of a massive multiuser multiple-input single-output (MU-MISO) system and propose an iterative hybrid precoding algorithm to approach the capacity performance of the traditional fully digital precoding scheme. We aim to attain a large baseband gain by zero-forcing (ZF) digital precoding on the equivalent channel and then minimize the total power to obtain the optimal RF precoder. Simulation results show that the proposed method can approach the performance of the conventional fully digital precoding with a low computational complexity.
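
    A minimal numerical sketch of the zero-forcing step on the equivalent baseband channel is shown below, with a phase-only matrix standing in for the analog RF precoder; the matrix sizes, the random analog phases and the power normalisation are illustrative assumptions rather than the authors' iterative algorithm.

        import numpy as np

        rng = np.random.default_rng(1)
        n_tx, n_rf, n_users = 64, 8, 4        # transmit antennas, RF chains, single-antenna users

        H = (rng.standard_normal((n_users, n_tx)) + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

        # analog precoder: constant-modulus (phase-only) entries, as imposed by the RF network
        F_rf = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(n_tx, n_rf))) / np.sqrt(n_tx)

        # zero-forcing digital precoder computed on the equivalent channel H_eq = H @ F_rf
        H_eq = H @ F_rf
        F_bb = H_eq.conj().T @ np.linalg.inv(H_eq @ H_eq.conj().T)
        F_bb /= np.linalg.norm(F_rf @ F_bb, "fro")            # normalise total transmit power

        # the effective channel is (a scaled) identity: each user sees only its own stream
        print(np.round(np.abs(H @ F_rf @ F_bb), 3))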

  8. A modified Generalized Least Squares method for large scale nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Schnabel, Georg [Irfu, CEA, Université Paris-Saclay, 91191 Gif-sur-Yvette (France); Atominstitut, TU Wien, Vienna (Austria); Leeb, Helmut [Atominstitut, TU Wien, Vienna (Austria)

    2017-01-01

    Nuclear data evaluation aims to provide estimates and uncertainties in the form of covariance matrices of cross sections and related quantities. Many practitioners use the Generalized Least Squares (GLS) formulas to combine experimental data and results of model calculations in order to determine reliable estimates and covariance matrices. A prerequisite for applying the GLS formulas is the construction of a prior covariance matrix for the observables from a set of model calculations. Modern nuclear model codes are able to provide predictions for a large number of observables. However, the inclusion of all observables may lead to a prior covariance matrix of intractable size. Therefore, we introduce mathematically equivalent versions of the GLS formulas to avoid the construction of the prior covariance matrix. Experimental data can be incrementally incorporated into the evaluation process, hence there is no upper limit on their amount. We demonstrate the modified GLS method in a tentative evaluation involving about three million observables using the code TALYS. The revised scheme is well suited as a building block of a database application providing evaluated nuclear data. Updating with new experimental data is feasible and users can query estimates and correlations of arbitrary subsets of the observables stored in the database.
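
    The GLS update that the paper reformulates can be stated compactly. The sketch below shows the textbook (Kalman-type) form and applies two independent data sets sequentially, which for independent measurement errors gives the same estimate as a single joint fit; it does not reproduce the paper's bookkeeping that avoids building the full prior covariance matrix.

        import numpy as np

        def gls_update(x0, P0, S, y, R):
            """One Generalized Least Squares update step.

            x0, P0 : prior estimate and prior covariance of the parameters
            S      : sensitivity matrix mapping parameters to the measured observables
            y, R   : experimental values and their covariance
            """
            K = P0 @ S.T @ np.linalg.inv(S @ P0 @ S.T + R)
            x1 = x0 + K @ (y - S @ x0)
            P1 = (np.eye(len(x0)) - K @ S) @ P0
            return x1, P1

        # incremental incorporation of two independent measurements of two parameters
        x, P = np.array([1.0, 2.0]), np.diag([0.5, 0.5])
        x, P = gls_update(x, P, np.array([[1.0, 0.0]]), np.array([1.3]), np.array([[0.1]]))
        x, P = gls_update(x, P, np.array([[0.0, 1.0]]), np.array([1.8]), np.array([[0.1]]))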

  9. APPLICATIONS OF CFD METHOD TO GAS MIXING ANALYSIS IN A LARGE-SCALED TANK

    International Nuclear Information System (INIS)

    Lee, S; Richard Dimenna, R

    2007-01-01

    The computational fluid dynamics (CFD) modeling technique was applied to the estimation of the maximum benzene concentration in the vapor space inside a large-scale, high-level radioactive waste tank at the Savannah River Site (SRS). The objective of the work was to perform the calculations for the benzene mixing behavior in the vapor space of Tank 48 and its impact on the local concentration of benzene. The calculations were used to evaluate the degree to which purge air mixes with benzene evolving from the liquid surface and its ability to prevent an unacceptable concentration of benzene from forming. The analysis was focused on changing the tank operating conditions to establish internal recirculation and changing the benzene evolution rate from the liquid surface. The model used three-dimensional momentum equations coupled with multi-species transport. The calculations included potential operating conditions for air inlet and exhaust flows, recirculation flow rate, and benzene evolution rate with prototypic tank geometry. The flow conditions are assumed to be fully turbulent since Reynolds numbers for typical operating conditions are in the range of 20,000 to 70,000 based on the inlet conditions of the air purge system. A standard two-equation turbulence model was used. The modeling results for the typical gas mixing problems available in the literature were compared and verified through comparisons with the test results. The benchmarking results showed that the predictions are in good agreement with the analytical solutions and literature data. Additional sensitivity calculations included a reduced benzene evolution rate, reduced air inlet and exhaust flow, and forced internal recirculation. The modeling results showed that the vapor space was fairly well mixed and that benzene concentrations were relatively low when forced recirculation and 72 cfm ventilation air through the tank boundary were imposed. For the same 72 cfm air inlet flow but without forced recirculation

  10. Backward-in-time methods to simulate large-scale transport and mixing in the ocean

    Science.gov (United States)

    Prants, S. V.

    2015-06-01

    In oceanography and meteorology, it is important to know not only where water or air masses are headed, but also where they came from. For example, it is important to find unknown sources of oil spills in the ocean and of dangerous substance plumes in the atmosphere. It is impossible with the help of conventional ocean and atmospheric numerical circulation models to extrapolate backward from the observed plumes to find the source, because those models cannot be reversed in time. We review here recently elaborated backward-in-time numerical methods to identify and study mesoscale eddies in the ocean and to compute where the waters in a given area came from. The area under study is populated with a large number of artificial tracers that are advected backward in time in a given velocity field that is supposed to be known analytically or numerically, or from satellite and radar measurements. After integrating the advection equations, one gets the position of each tracer on a fixed day in the past and can identify particle positions at earlier times from their known destinations. The results provided show that the method is efficient, for example, in estimating probabilities of finding increased concentrations of radionuclides and other pollutants in oceanic mesoscale eddies. The backward-in-time methods are illustrated in this paper with a few examples. Backward-in-time Lagrangian maps are applied to identify eddies in satellite-derived and numerically generated velocity fields and to document the pathways by which they exchange water with their surroundings. Backward-in-time trapping maps are used to identify mesoscale eddies in the altimetric velocity field with a risk of being contaminated by Fukushima-derived radionuclides. The results of simulations are compared with in situ measurements of caesium concentration in sea water samples collected in a recent research vessel cruise in the area to the east of Japan. Backward-in-time latitudinal maps and the corresponding
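
    The core of such backward-in-time tracking is simply integrating the tracer advection equations with a negative time step. The toy sketch below uses an analytically prescribed, steady 2-D velocity field and a fourth-order Runge-Kutta scheme; the field, the step size and the grid of tracers are placeholders.

        import numpy as np

        def velocity(x, y, t):
            """Placeholder incompressible 2-D velocity field (a steady single gyre)."""
            return -np.sin(np.pi * x) * np.cos(np.pi * y), np.cos(np.pi * x) * np.sin(np.pi * y)

        def advect_backward(x0, y0, t_end, dt=0.01, n_steps=500):
            """Fourth-order Runge-Kutta advection of tracers backward in time."""
            x, y, t = np.array(x0, float), np.array(y0, float), t_end
            for _ in range(n_steps):
                k1 = velocity(x, y, t)
                k2 = velocity(x - 0.5 * dt * k1[0], y - 0.5 * dt * k1[1], t - 0.5 * dt)
                k3 = velocity(x - 0.5 * dt * k2[0], y - 0.5 * dt * k2[1], t - 0.5 * dt)
                k4 = velocity(x - dt * k3[0], y - dt * k3[1], t - dt)
                x = x - dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
                y = y - dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
                t -= dt
            return x, y    # estimated origins of the tracers n_steps*dt earlier

        # seed a grid of tracers over the region of interest and trace where they came from
        gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 20), np.linspace(0.1, 0.9, 20))
        x_origin, y_origin = advect_backward(gx, gy, t_end=5.0)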

  11. Steady-state analysis of large scale systems : the successive lumping method

    NARCIS (Netherlands)

    Smit, L.C.

    2016-01-01

    The general area of research of this dissertation concerns large systems with random aspects to their behavior that can be modeled and studied in terms of the stationary distribution of Markov chains. As the state spaces of such systems become large, their behavior gets hard to analyze, either via

  12. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation

    Science.gov (United States)

    Benning, Amirta; Ghaleb, Maisoon; Suokas, Anu; Dixon-Woods, Mary; Dawson, Jeremy; Barber, Nick; Franklin, Bryony Dean; Girling, Alan; Hemming, Karla; Carmalt, Martin; Rudge, Gavin; Naicker, Thirumalai; Nwulu, Ugochi; Choudhury, Sopna

    2011-01-01

    Objectives To conduct an independent evaluation of the first phase of the Health Foundation’s Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Design Mixed method evaluation involving five substudies, before and after design. Setting NHS hospitals in the United Kingdom. Participants Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention The SPI1 was a compound (multi-component) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but statistically significant difference in one dimension of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about the legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration—monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items)—there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296

  13. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation.

    Science.gov (United States)

    Benning, Amirta; Ghaleb, Maisoon; Suokas, Anu; Dixon-Woods, Mary; Dawson, Jeremy; Barber, Nick; Franklin, Bryony Dean; Girling, Alan; Hemming, Karla; Carmalt, Martin; Rudge, Gavin; Naicker, Thirumalai; Nwulu, Ugochi; Choudhury, Sopna; Lilford, Richard

    2011-02-03

    To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Mixed method evaluation involving five substudies, before and after design. NHS hospitals in the United Kingdom. Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. The SPI1 was a compound (multi-component) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but statistically significant difference in one dimension of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about the legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration--monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items)--there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2

  14. A New Method Based On Modified Shuffled Frog Leaping Algorithm In Order To Solve Nonlinear Large Scale Problem

    Directory of Open Access Journals (Sweden)

    Aliasghar Baziar

    2015-03-01

    In order to handle large-scale problems, this study uses the shuffled frog leaping algorithm, an optimization method based on natural memetics, and applies a new two-phase modification to it to achieve a better search of the problem space. The suggested algorithm is evaluated by comparison with some well-known algorithms on several benchmark optimization problems. The simulation results clearly show the superiority of this algorithm over other well-known methods in the area.
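
    For reference, a basic (unmodified) shuffled frog leaping loop can be written in a few lines; the memeplex size, bounds and the random reset step are assumptions, and the paper's two-phase modification is not reproduced here.

        import numpy as np

        def sfla(cost, dim, n_frogs=30, n_memeplexes=5, n_gen=100, lo=-5.0, hi=5.0, seed=0):
            """Basic shuffled frog leaping algorithm (without the two-phase modification)."""
            rng = np.random.default_rng(seed)
            frogs = rng.uniform(lo, hi, size=(n_frogs, dim))
            for _ in range(n_gen):
                fit = np.array([cost(f) for f in frogs])
                frogs = frogs[np.argsort(fit)]                    # best frog first
                best_global = frogs[0].copy()
                for m in range(n_memeplexes):                     # shuffle frogs into memeplexes
                    idx = np.arange(m, n_frogs, n_memeplexes)
                    worst, best_local = idx[-1], idx[0]
                    step = rng.random() * (frogs[best_local] - frogs[worst])
                    trial = np.clip(frogs[worst] + step, lo, hi)
                    if cost(trial) >= cost(frogs[worst]):         # no gain: leap toward the global best
                        step = rng.random() * (best_global - frogs[worst])
                        trial = np.clip(frogs[worst] + step, lo, hi)
                    if cost(trial) < cost(frogs[worst]):
                        frogs[worst] = trial
                    else:                                         # still no gain: random reset
                        frogs[worst] = rng.uniform(lo, hi, size=dim)
            return min(frogs, key=cost)

        best = sfla(lambda x: float(np.sum(x ** 2)), dim=10)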

  15. Development and application of a massively parallel KKR Green function method for large scale systems

    Energy Technology Data Exchange (ETDEWEB)

    Thiess, Alexander R.

    2011-12-19

    In this thesis we present the development of the self-consistent, full-potential Korringa-Kohn-Rostoker (KKR) Green function method KKRnano for calculating the electronic properties, magnetic interactions, and total energy including all electrons on the basis of the density functional theory (DFT) on high-end massively parallelized high-performance computers for supercells containing thousands of atoms without sacrifice of accuracy. KKRnano was used for the following two applications. The first application is centered in the field of dilute magnetic semiconductors. In this field a new promising material combination was identified: gadolinium doped gallium nitride which shows ferromagnetic ordering of colossal magnetic moments above room temperature. It quickly turned out that additional extrinsic defects are inducing the striking properties. However, the question which kind of extrinsic defects are present in experimental samples is still unresolved. In order to shed light on this open question, we perform extensive studies of the most promising candidates: interstitial nitrogen and oxygen, as well as gallium vacancies. By analyzing the pairwise magnetic coupling among defects it is shown that nitrogen and oxygen interstitials cannot support thermally stable ferromagnetic order. Gallium vacancies, on the other hand, facilitate an important coupling mechanism. The vacancies are found to induce large magnetic moments on all surrounding nitrogen sites, which then couple ferromagnetically both among themselves and with the gadolinium dopants. Based on a statistical evaluation it can be concluded that already small concentrations of gallium vacancies can lead to a distinct long-range ferromagnetic ordering. Beyond this important finding we present further indications, from which we infer that gallium vacancies likely cause the striking ferromagnetic coupling of colossal magnetic moments in GaN:Gd. The second application deals with the phase-change material germanium

  16. Development and application of a massively parallel KKR Green function method for large scale systems

    International Nuclear Information System (INIS)

    Thiess, Alexander R.

    2011-01-01

    In this thesis we present the development of the self-consistent, full-potential Korringa-Kohn-Rostoker (KKR) Green function method KKRnano for calculating the electronic properties, magnetic interactions, and total energy including all electrons on the basis of the density functional theory (DFT) on high-end massively parallelized high-performance computers for supercells containing thousands of atoms without sacrifice of accuracy. KKRnano was used for the following two applications. The first application is centered in the field of dilute magnetic semiconductors. In this field a new promising material combination was identified: gadolinium doped gallium nitride which shows ferromagnetic ordering of colossal magnetic moments above room temperature. It quickly turned out that additional extrinsic defects are inducing the striking properties. However, the question which kind of extrinsic defects are present in experimental samples is still unresolved. In order to shed light on this open question, we perform extensive studies of the most promising candidates: interstitial nitrogen and oxygen, as well as gallium vacancies. By analyzing the pairwise magnetic coupling among defects it is shown that nitrogen and oxygen interstitials cannot support thermally stable ferromagnetic order. Gallium vacancies, on the other hand, facilitate an important coupling mechanism. The vacancies are found to induce large magnetic moments on all surrounding nitrogen sites, which then couple ferromagnetically both among themselves and with the gadolinium dopants. Based on a statistical evaluation it can be concluded that already small concentrations of gallium vacancies can lead to a distinct long-range ferromagnetic ordering. Beyond this important finding we present further indications, from which we infer that gallium vacancies likely cause the striking ferromagnetic coupling of colossal magnetic moments in GaN:Gd. The second application deals with the phase-change material germanium

  17. Large-Scale Flight Phase Identification from ADS-B Data Using Machine Learning Methods

    NARCIS (Netherlands)

    Sun, J.; Ellerbroek, J.; Hoekstra, J.M.; Lovell, D.; Fricke, H.

    2016-01-01

    With the increasing availability of ADS-B transponders on commercial aircraft, as well as the rapidly growing deployment of ground stations that provide public access to their data, accessing open aircraft flight data is becoming easier for researchers. Given the large number of operational

  18. Systems and methods for large-scale nanotemplate and nanowire fabrication

    KAUST Repository

    Vidal, Enrique Vilanova; Alfadhel, Ahmed; Ivanov, Iurii; Kosel, Jürgen

    2016-01-01

    Systems and methods for large-scale nanotemplate and nanowire fabrication are provided. The system can include a sample holder and one or more chemical containers fluidly connected to the sample holder. The sample holder can be configured to contain

  19. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, Fujisawa [Tokyo Univ., Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science (Japan); Genki, Yagawa [Tokyo Univ., Department of Quantum Engineering and Systems Science (Japan)

    2003-07-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  20. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    International Nuclear Information System (INIS)

    Toshimitsu, Fujisawa; Genki, Yagawa

    2003-01-01

    In this paper, a FEM-based (finite element method) mesh free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kind of analysis, can effectively be performed on parallel processors by using the proposed method. (authors)

  1. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn-Sham calculations at high temperature

    Science.gov (United States)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.

    2018-03-01

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
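
    The quadrature rule the method is named after is easy to state in isolation. The sketch below builds the (n+1)-point Clenshaw-Curtis nodes and weights on [-1, 1] and applies them to a smooth Fermi-Dirac-like occupation factor; it illustrates only the quadrature, not the spatially localized O(N) density functional machinery of SQDFT, and the smearing parameters are arbitrary.

        import numpy as np

        def clenshaw_curtis(n):
            """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1]."""
            theta = np.pi * np.arange(n + 1) / n
            x = np.cos(theta)
            w = np.zeros(n + 1)
            for k in range(n + 1):
                s = 0.0
                for j in range(1, n // 2 + 1):
                    b = 1.0 if 2 * j == n else 2.0
                    s += b * np.cos(2 * j * theta[k]) / (4 * j * j - 1)
                c = 1.0 if k in (0, n) else 2.0
                w[k] = (c / n) * (1.0 - s)
            return x, w

        # integrate a smooth occupation factor over a spectrum rescaled to [-1, 1]
        beta, mu = 20.0, 0.1
        occupation = lambda lam: 1.0 / (1.0 + np.exp(beta * (lam - mu)))
        x, w = clenshaw_curtis(32)
        print(np.dot(w, occupation(x)))     # converges rapidly with the number of quadrature points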

  2. Simulation of hydrogen release and combustion in large scale geometries: models and methods

    International Nuclear Information System (INIS)

    Beccantini, A.; Dabbene, F.; Kudriakov, S.; Magnaud, J.P.; Paillere, H.; Studer, E.

    2003-01-01

    The simulation of H2 distribution and combustion in confined geometries such as nuclear reactor containments is a challenging task from the point of view of numerical simulation, as it involves quite disparate length and time scales, which need to be resolved appropriately and efficiently. CEA is involved in the development and validation of codes to model such problems, for external clients such as IRSN (TONUS code) and Technicatome (NAUTILUS code), or for its own safety studies. This paper provides an overview of the physical and numerical models developed for such applications, as well as some insight into the current research topics which are being pursued. Examples of H2 mixing and combustion simulations are given. (authors)

  3. A new method of presentation the large-scale magnetic field structure on the Sun and solar corona

    Science.gov (United States)

    Ponyavin, D. I.

    1995-01-01

    The large-scale photospheric magnetic field, measured at Stanford, has been analyzed in terms of surface harmonics. Changes of the photospheric field which occur within a whole solar rotation period can be resolved by this analysis. For this reason we used daily magnetograms of the line-of-sight magnetic field component observed from Earth over the solar disc. We have estimated the period during which day-to-day full-disc magnetograms must be collected. An original algorithm was applied to resolve the time variations of spherical harmonics that reflect the time evolution of the large-scale magnetic field within a solar rotation period. This method of magnetic field presentation can be useful when direct magnetograph observations are lacking, for example because of bad weather conditions. We have used the calculated surface harmonics to reconstruct the large-scale magnetic field structure on the source surface near the Sun - the origin of the heliospheric current sheet and solar wind streams. The obtained results have been compared with spacecraft in situ observations and geomagnetic activity. We tried to show that the proposed technique can trace short-time variations of the heliospheric current sheet and short-lived solar wind streams. We have also compared our results with those obtained traditionally from the potential field approximation and extrapolation using synoptic charts as initial boundary conditions.

  4. Systems and methods for large-scale nanotemplate and nanowire fabrication

    KAUST Repository

    Vidal, Enrique Vilanova

    2016-03-31

    Systems and methods for large-scale nanotemplate and nanowire fabrication are provided. The system can include a sample holder and one or more chemical containers fluidly connected to the sample holder. The sample holder can be configured to contain a solution and to releasably hold a substrate material within the solution. In other aspects, the system can include a robotic arm including a head configured to releasably hold a substrate material. The methods can include initiating a treatment step by moving a chemical solution from a chemical container to the sample holder to submerge the substrate material for a period of time. The methods can include moving the robotic arm to position the substrate in a chemical container. The treatment steps can be stopped by removing the chemical solution from the sample holder or by moving the robotic arm to remove the substrate from the chemical container. The treatment steps can include degreasing, polishing, rinsing, anodization, and deposition.

  5. Large-scale validation of methods for cytotoxic T-lymphocyte epitope prediction

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Lundegaard, Claus; Lamberth, K.

    2007-01-01

    BACKGROUND: Reliable predictions of Cytotoxic T lymphocyte (CTL) epitopes are essential for rational vaccine design. Most importantly, they can minimize the experimental effort needed to identify epitopes. NetCTL is a web-based tool designed for predicting human CTL epitopes in any given protein....... of the other methods achieved a sensitivity of 0.64. The NetCTL-1.2 method is available at http://www.cbs.dtu.dk/services/NetCTL.All used datasets are available at http://www.cbs.dtu.dk/suppl/immunology/CTL-1.2.php....

  6. Parallel Dynamic Analysis of a Large-Scale Water Conveyance Tunnel under Seismic Excitation Using ALE Finite-Element Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wang

    2016-01-01

    Parallel analyses of the dynamic responses of a large-scale water conveyance tunnel under seismic excitation are presented in this paper. A full three-dimensional numerical model considering the water-tunnel-soil coupling is established and adopted to investigate the tunnel’s dynamic responses. The movement and sloshing of the internal water are simulated using the multi-material Arbitrary Lagrangian Eulerian (ALE) method. Nonlinear fluid–structure interaction (FSI) between the tunnel and the inner water is treated by using the penalty method. Nonlinear soil-structure interaction (SSI) between the soil and the tunnel is dealt with by using the surface-to-surface contact algorithm. To overcome computing power limitations and to deal with such a large-scale calculation, a parallel algorithm based on the modified recursive coordinate bisection (MRCB) considering the balance of SSI and FSI loads is proposed and used. The whole simulation is accomplished on Dawning 5000A using the proposed MRCB based parallel algorithm optimized to run on supercomputers. The simulation model and the proposed approaches are validated by comparison with the added mass method. Dynamic responses of the tunnel are analyzed and the parallelism is discussed. Besides, factors affecting the dynamic responses are investigated. Better speedup and parallel efficiency show the scalability of the parallel method, and the analysis results can be used to aid in the design of water conveyance tunnels.
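
    The partitioning idea behind (modified) recursive coordinate bisection can be illustrated with a plain RCB routine that recursively splits weighted nodes along their longest coordinate axis so that the summed work on each side is balanced; reducing the combined SSI/FSI load to a single scalar weight per node is a simplifying assumption, not the paper's scheme.

        import numpy as np

        def rcb(points, weights, n_parts):
            """Recursive coordinate bisection of weighted points into n_parts (a power of two)."""
            labels = np.zeros(len(points), dtype=int)

            def split(idx, parts, offset):
                if parts == 1:
                    labels[idx] = offset
                    return
                axis = int(np.argmax(points[idx].max(axis=0) - points[idx].min(axis=0)))
                order = idx[np.argsort(points[idx, axis])]
                cum = np.cumsum(weights[order])
                cut = int(np.clip(np.searchsorted(cum, cum[-1] / 2.0), 0, len(order) - 2))
                split(order[:cut + 1], parts // 2, offset)
                split(order[cut + 1:], parts // 2, offset + parts // 2)

            split(np.arange(len(points)), n_parts, 0)
            return labels

        # usage: partition 10 000 random nodes with random work weights into 8 subdomains
        rng = np.random.default_rng(0)
        part = rcb(rng.random((10_000, 3)), rng.random(10_000), 8)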

  7. Evaluating high risks in large-scale projects using an extended VIKOR method under a fuzzy environment

    Directory of Open Access Journals (Sweden)

    S. Ebrahimnejad

    2012-04-01

    The complexity of large-scale projects has led to numerous risks in their life cycle. This paper presents a new risk evaluation approach in order to rank the high risks in large-scale projects and improve the performance of these projects. It is based on fuzzy set theory, which is an effective tool to handle uncertainty. It is also based on an extended VIKOR method, one of the well-known multiple criteria decision-making (MCDM) methods. The proposed decision-making approach integrates knowledge and experience acquired from professional experts, since they perform the risk identification and also the subjective judgments of the performance rating for high risks in terms of conflicting criteria, including probability, impact, quickness of reaction toward risk, event measure quantity and event capability criteria. The most notable difference of the proposed VIKOR method from its traditional version is the use of fuzzy decision-matrix data to calculate the ranking index without the need to ask the experts. Finally, the proposed approach is illustrated with a real-case study of an Iranian power plant project, and the associated results are compared with two well-known decision-making methods under a fuzzy environment.
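
    The crisp VIKOR ranking on which the fuzzy extension builds takes only a few lines; the decision matrix, criteria weights and the compromise parameter v below are illustrative, and the fuzzy arithmetic of the paper is not included.

        import numpy as np

        def vikor(X, w, v=0.5, benefit=None):
            """Classical VIKOR: X is (alternatives x criteria); returns alternatives ordered by Q."""
            X = np.asarray(X, float)
            if benefit is None:
                benefit = np.ones(X.shape[1], dtype=bool)   # treat every criterion as "larger is better"
            f_best = np.where(benefit, X.max(axis=0), X.min(axis=0))
            f_worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
            span = np.where(f_best == f_worst, 1.0, f_best - f_worst)
            d = (f_best - X) / span
            S = (w * d).sum(axis=1)                         # group utility
            R = (w * d).max(axis=1)                         # individual regret
            Q = v * (S - S.min()) / max(S.max() - S.min(), 1e-12) \
                + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12)
            return np.argsort(Q)                            # best compromise (lowest Q) first

        # usage: rank four alternatives scored on three weighted criteria
        ranking = vikor([[7, 8, 3], [5, 9, 6], [8, 6, 4], [6, 7, 7]], w=np.array([0.5, 0.3, 0.2]))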

  8. Verification of the analytical fracture assessments methods by a large scale pressure vessel test

    Energy Technology Data Exchange (ETDEWEB)

    Keinanen, H; Oberg, T; Rintamaa, R; Wallin, K

    1988-12-31

    This document deals with the use of fracture mechanics for the assessment of reactor pressure vessels. Tests have been carried out to verify the analytical fracture assessment methods. The analysis is focused on flaw dimensions and the scatter band of material characteristics. Results are provided and compared to experimental ones. (TEC).

  9. Timetable-based simulation method for choice set generation in large-scale public transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Anderson, Marie Karen; Nielsen, Otto Anker

    2016-01-01

    The composition and size of the choice sets are key to the correct estimation of and prediction by route choice models. While the existing literature has paid a great deal of attention to the generation of path choice sets for private transport problems, the same does not apply to public...... transport problems. This study proposes a timetable-based simulation method for generating path choice sets in a multimodal public transport network. Moreover, this study illustrates the feasibility of its implementation by applying the method to reproduce 5131 real-life trips in the Greater Copenhagen Area...... and to assess the choice set quality in a complex multimodal transport network. Results illustrate the applicability of the algorithm and the relevance of the utility specification chosen for the reproduction of real-life path choices. Moreover, results show that the level of stochasticity used in choice set...
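
    The simulation principle (repeat shortest-path searches while the generalised costs are randomly perturbed, and keep the distinct paths found) can be sketched on a toy graph; the graph, the log-normal error and the number of draws are assumptions, and the timetable detail of the actual method is omitted.

        import networkx as nx
        import numpy as np

        def simulate_choice_set(G, origin, destination, n_draws=200, sigma=0.3, seed=0):
            """Generate a path choice set by repeated shortest-path searches on perturbed costs."""
            rng = np.random.default_rng(seed)
            choice_set = set()
            for _ in range(n_draws):
                for _, _, data in G.edges(data=True):
                    # multiplicative error standing in for taste and timetable variation
                    data["sim_cost"] = data["cost"] * rng.lognormal(mean=0.0, sigma=sigma)
                path = nx.shortest_path(G, origin, destination, weight="sim_cost")
                choice_set.add(tuple(path))
            return choice_set

        # toy network: nodes are stops, edge costs are generalised travel times
        G = nx.DiGraph()
        G.add_weighted_edges_from(
            [("A", "B", 10), ("B", "D", 12), ("A", "C", 9), ("C", "D", 15), ("B", "C", 3)],
            weight="cost",
        )
        paths = simulate_choice_set(G, "A", "D")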

  10. Numerical methods for the design of large-scale nonlinear discrete ill-posed inverse problems

    International Nuclear Information System (INIS)

    Haber, E; Horesh, L; Tenorio, L

    2010-01-01

    Design of experiments for discrete ill-posed problems is a relatively new area of research. While there has been some limited work concerning the linear case, little has been done to study design criteria and numerical methods for ill-posed nonlinear problems. We present an algorithmic framework for nonlinear experimental design with an efficient numerical implementation. The data are modeled as indirect, noisy observations of the model collected via a set of plausible experiments. An inversion estimate based on these data is obtained by a weighted Tikhonov regularization whose weights control the contribution of the different experiments to the data misfit term. These weights are selected by minimization of an empirical estimate of the Bayes risk that is penalized to promote sparsity. This formulation entails a bilevel optimization problem that is solved using a simple descent method. We demonstrate the viability of our design with a problem in electromagnetic imaging based on direct current resistivity and magnetotelluric data
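    The inner inversion at the heart of this design problem is a weighted Tikhonov estimate, roughly min_m ||W^(1/2)(Fm - d)||^2 + alpha ||Lm||^2, whose experiment weights are what the outer (bilevel) design problem optimizes. A sketch of that inner step for a linear forward operator is shown below; the random forward map, regularizer, and noise level are purely illustrative, and the bilevel, sparsity-penalized weight selection is not reproduced.

```python
import numpy as np

def weighted_tikhonov(F, d, w, L, alpha):
    """Solve min_m ||W^(1/2)(F m - d)||^2 + alpha ||L m||^2 for a linear forward map F;
    w holds the per-experiment weights that the outer design problem would tune."""
    W = np.diag(w)
    A = F.T @ W @ F + alpha * (L.T @ L)      # normal equations
    b = F.T @ W @ d
    return np.linalg.solve(A, b)

# toy problem: a smooth model observed through a random forward operator
rng = np.random.default_rng(0)
n_obs, n_par = 80, 40
F = rng.standard_normal((n_obs, n_par))
m_true = np.sin(np.linspace(0, 3 * np.pi, n_par))
d = F @ m_true + 0.05 * rng.standard_normal(n_obs)
L = np.eye(n_par) - np.eye(n_par, k=1)       # first-difference (smoothness) regularizer
w = np.ones(n_obs)                           # uniform weights; the design stage would sparsify these
m_hat = weighted_tikhonov(F, d, w, L, alpha=1.0)
print(round(float(np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true)), 3))
```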

  11. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    Science.gov (United States)

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  12. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    ...of selected existing buildings in and around Copenhagen covered with mosaic tiles, unglazed or glazed clay tiles. It is buildings which have qualities that I would like applied, perhaps transformed or, most preferably, interpreted anew, for the large glazed concrete panels I am developing. Keywords: color, light...

  13. Method of coupling 1-D unsaturated flow with 3-D saturated flow on large scale

    Directory of Open Access Journals (Sweden)

    Yan Zhu

    2011-12-01

    Full Text Available A coupled unsaturated-saturated water flow numerical model was developed. Water flow in the unsaturated zone is treated as one-dimensional vertical flow, which varies in the horizontal direction according to the groundwater table and the atmospheric boundary conditions, while the groundwater flow is treated as three-dimensional. The recharge flux from soil water to groundwater serves as the bottom flux for the numerical simulation of the unsaturated zone and as the upper flux for the groundwater simulation; it links the two otherwise separate flow systems. The soil water equation is solved based on an assumed groundwater table, giving a predicted recharge flux. The groundwater equation is then solved with this predicted recharge flux as the upper boundary condition. Iteration continues until the discrepancy between the assumed and calculated groundwater nodal heads falls within a prescribed accuracy. Illustrative examples with different water flow scenarios involving the Dirichlet boundary condition, the Neumann boundary condition, the atmospheric boundary condition, and source or sink terms were calculated with the coupled model. The results are compared with those of other models, including Hydrus-1D, SWMS-2D, and FEFLOW, and demonstrate that the coupled model is effective and accurate and can significantly reduce the computational time for the large numbers of nodes in saturated-unsaturated water flow simulation.
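    The coupling logic, solve the 1-D columns with an assumed water table, pass the resulting recharge to the 3-D groundwater solve, and repeat until the heads stop changing, can be sketched independently of any particular solver. In the sketch below the two solver functions are toy stand-ins (simple algebraic relations), not the Richards-equation and groundwater solvers of the paper.

```python
import numpy as np

def couple_unsat_sat(head0, solve_unsaturated, solve_saturated, tol=1e-6, max_iter=50):
    """Fixed-point iteration coupling 1-D unsaturated columns to a 3-D groundwater model:
    the recharge computed from the assumed heads feeds the saturated solve, and the loop
    repeats until the groundwater heads stop changing."""
    head = np.asarray(head0, dtype=float).copy()
    for it in range(1, max_iter + 1):
        recharge = solve_unsaturated(head)        # bottom flux of each soil column
        new_head = solve_saturated(recharge)      # heads from the 3-D solve with that upper flux
        if np.max(np.abs(new_head - head)) < tol:
            return new_head, it
        head = new_head
    return head, max_iter

# toy stand-ins for the two solvers (hypothetical algebraic relations, not real flow solvers)
solve_unsat = lambda h: 0.01 * (5.0 - h)          # deeper water table -> larger recharge
solve_sat = lambda q: 2.0 + 40.0 * q              # heads rise with recharge
heads, n_iter = couple_unsat_sat(np.full(4, 3.0), solve_unsat, solve_sat)
print(np.round(heads, 4), n_iter)
```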

  14. Non-destructive screening method for radiation hardened performance of large scale integration

    International Nuclear Information System (INIS)

    Zhou Dong; Xi Shanbin; Guo Qi; Ren Diyuan; Li Yudong; Sun Jing; Wen Lin

    2013-01-01

    The space radiation environment can induce radiation damage in electronic devices. As the performance of commercial devices is generally superior to that of radiation-hardened devices, it is worthwhile to screen commercial devices for good radiation-hardened performance; applying such devices in space systems could improve the reliability of those systems. Combining mathematical regression analysis with different physical stressing experiments, we investigated a non-destructive screening method for the radiation-hardened performance of integrated circuits. The relationship between the change of typical parameters and the radiation performance of the circuit was discussed, and the irradiation-sensitive parameters were identified. A multiple linear regression equation for predicting the radiation performance was established. Finally, the regression equations under stress conditions were verified by practical irradiation. The results show that the reliability and accuracy of the non-destructive screening method can be improved by combining mathematical regression analysis with practical stressing experiments. (authors)
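    The regression step can be sketched as an ordinary multiple linear regression that predicts the radiation-induced degradation from the stress-sensitive pre-irradiation parameters. The parameter names, data, and coefficients below are hypothetical; only the fit-and-predict mechanics are illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical pre-irradiation stress measurements for 12 devices:
# columns = leakage-current change, threshold-voltage shift, supply-current change under stress
X = rng.normal(size=(12, 3))
y = X @ np.array([0.8, 1.5, -0.4]) + 0.1 * rng.normal(size=12)   # observed radiation response

# fit the multiple linear regression y = X b + b0 by least squares
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# screen a new (hypothetical) device: predict its radiation response from its stress data
x_new = np.array([0.2, -0.1, 0.5, 1.0])          # last entry multiplies the intercept
print("predicted radiation-induced degradation:", round(float(x_new @ coef), 3))
```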

  15. Two-Level Chebyshev Filter Based Complementary Subspace Method: Pushing the Envelope of Large-Scale Electronic Structure Calculations.

    Science.gov (United States)

    Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E

    2018-06-12

    We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
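    The basic building block, a Chebyshev polynomial filter that amplifies the low (occupied) end of the spectrum relative to an interval [a, b] covering the rest, can be sketched with dense linear algebra; the DG discretization, the two-level complementary-subspace machinery, and the parallelization are well beyond a few lines. The toy Hamiltonian, filter degree, and bounds below are illustrative (in practice the bounds are estimated, e.g. with a few Lanczos steps, rather than from a full diagonalization).

```python
import numpy as np

def chebyshev_filter(H, X, m, a, b):
    """Apply a degree-m Chebyshev filter p_m(H) to the block X: eigencomponents with
    eigenvalues in [a, b] are damped, those below a (the 'occupied' end) are amplified."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y = (H @ X - c * X) / e                      # degree-1 term of the recurrence
    for _ in range(2, m + 1):
        Y_new = 2.0 * (H @ Y - c * Y) / e - X    # three-term Chebyshev recurrence
        X, Y = Y, Y_new
    return Y

# toy symmetric "Hamiltonian" and a random block for 10 "occupied" states
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2.0
V = rng.standard_normal((200, 10))
evals = np.linalg.eigvalsh(H)                    # toy only; real codes estimate the bounds cheaply
F = chebyshev_filter(H, V, m=15, a=evals[10], b=evals[-1])
Q, _ = np.linalg.qr(F)                           # orthonormalize the filtered block
ritz = np.sort(np.linalg.eigvalsh(Q.T @ H @ Q))
print(np.round(ritz[:3], 4), np.round(evals[:3], 4))   # lowest Ritz values already close; repeated sweeps refine the rest
```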

  16. Customized Mobile Apps: Improving data collection methods in large-scale field works in Finnish Lapland

    Science.gov (United States)

    Kupila, Juho

    2017-04-01

    Since the 1990s, a huge amount of data related to groundwater and soil has been collected in several regional projects in Finland. The EU-funded project "The coordination of groundwater protection and aggregates industry in Finnish Lapland, phase II" started in July 2016 and covers the last unstudied areas of these projects in Finland. The project is carried out by the Geological Survey of Finland (GTK), the University of Oulu and the Finnish Environment Institute, and its main aim is to reconcile groundwater protection with the extractive use of soil resources in the Lapland area. As in the earlier projects, several kinds of studies are carried out during this three-year research and development project. These include, for example, drilling and installation of groundwater observation wells, GPR surveys and many kinds of point-type observations, such as sampling and general field mapping. Due to the size of the study area (over 80 000 km2, about one quarter of the total area of Finland), improving the field work methods has become essential. For general field observations, GTK has developed specific mobile applications for Android devices. With these apps, data can easily be collected, for example from a certain groundwater area, and then uploaded directly to GTK's database. Collected information may include sampling data, photos, layer observations, groundwater data etc., all linked to the current GPS location. New data are also easily available for post-processing. In this project the benefits of these applications will be field-tested; ergonomics, economy and general usability will be taken into account and compared with other data collection methods, such as working with heavy fieldwork laptops. Although these apps are designed for use in GTK's projects, they are free to download from Google Play for anyone interested. The Geological Survey of Finland has the main role in this project, with support from national and local authorities and stakeholders. The project is funded

  17. New method for large scale production of medically applicable Actinium-225 and Radium-223

    International Nuclear Information System (INIS)

    Aliev, R.A.; Vasilyev, A.N.; Ostapenko, V.; Kalmykov, S.N.; Zhuikov, B.L.; Ermolaev, S.V.; Lapshina, E.V.

    2014-01-01

    Alpha-emitters (211At, 212Bi, 213Bi, 223Ra, 225Ac) are promising for targeted radiotherapy of cancer. Only two alpha decays near a cell membrane result in 50% death of cancer cells, and only a single decay inside the cell is required for this. 225Ac may be used either directly or as a mother radionuclide in a 213Bi isotope generator. Production of 225Ac is provided by three main suppliers - the Institute for Transuranium Elements in Germany, Oak Ridge National Laboratory in the USA and the Institute of Physics and Power Engineering in Obninsk, Russia. The current worldwide production of 225Ac is approximately 1.7 Ci per year, which corresponds to only 100-200 patients that could be treated annually. The common approach for 225Ac production is separation from mother 229Th or irradiation of 226Ra with protons in a cyclotron. Both methods have practical limitations for routine application. 225Ac can also be produced by irradiation of natural thorium with medium-energy protons. Cumulative cross sections for the formation of 225Ac, 227Ac, 227Th and 228Th have been obtained recently. Thorium targets (1-9 g) were irradiated with a 114-91 MeV proton beam (1-50 μA) at the INR linear accelerator. After dissolution in 8 M HNO3 + 0.004 M HF, thorium was removed by double LLX with HDEHP in toluene (1:1). Ac and REE were pre-concentrated and separated from Ra and most fission products on DGA Resin (Triskem). After washing with 0.01 M HNO3, Ac was separated from REE on TRU Resin (Triskem) in 3 M HNO3 media. About 6 mCi of 225Ac was separated in a hot cell with a chemical yield of 85%. The method may be upscaled for production of Ci amounts of the radionuclide. The main impurity is 227Ac (0.1% at the EOB), but it does not prevent 225Ac from being used for medical 225Ac/213Bi generators. (author)

  18. Large-scale automated analysis of news media: a novel computational method for obesity policy research.

    Science.gov (United States)

    Hamad, Rita; Pomeranz, Jennifer L; Siddiqi, Arjumand; Basu, Sanjay

    2015-02-01

    Analyzing news media allows obesity policy researchers to understand popular conceptions about obesity, which is important for targeting health education and policies. A persistent dilemma is that investigators have to read and manually classify thousands of individual news articles to identify how obesity and obesity-related policy proposals may be described to the public in the media. A machine learning method called "automated content analysis", which permits researchers to train computers to "read" and classify massive volumes of documents, was demonstrated. 14,302 newspaper articles that mentioned the word "obesity" during 2011-2012 were identified. Four states that vary in obesity prevalence and policy (Alabama, California, New Jersey, and North Carolina) were examined. The reliability of an automated program to categorize the media's framing of obesity as an individual-level problem (e.g., diet) and/or an environmental-level problem (e.g., obesogenic environment) was tested. The automated program performed similarly to human coders. The proportion of articles with individual-level framing (27.7-31.0%) was higher than the proportion with neutral (18.0-22.1%) or environmental-level framing (16.0-16.4%) across all states and over the entire study period. The feasibility of large-scale automated content analysis of news media was thus demonstrated. © 2014 The Obesity Society.
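    "Automated content analysis" of this kind generally amounts to training a supervised text classifier on a hand-coded subset of articles and applying it to the rest. The sketch below is a generic scikit-learn baseline (TF-IDF features plus a linear classifier), not the authors' actual pipeline, and the articles and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# invented hand-coded training articles and frame labels
articles = [
    "Experts urge individuals to improve their diet and exercise habits.",
    "City council debates a soda tax and fast-food zoning near schools.",
    "New study links obesity rates to lack of personal willpower.",
    "Report blames obesogenic environments and food deserts for rising obesity.",
]
labels = ["individual", "environmental", "individual", "environmental"]

# TF-IDF bag-of-words features plus a linear classifier: a common content-analysis baseline
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    LogisticRegression(max_iter=1000))
clf.fit(articles, labels)

# classify the (much larger) uncoded corpus
print(clf.predict(["Lawmakers propose menu labeling rules for restaurant chains."]))
```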

  19. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  20. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)
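    For the purely spatial pairwise MaxEnt (Ising-like) case, the Monte Carlo fitting loop can be sketched compactly: sample spike words with Metropolis updates, then nudge the parameters so that the model's firing rates and pairwise correlations move toward those of the data. The spatio-temporal, Markovian models of the paper add memory terms but follow the same logic; the raster, learning rate, and iteration counts below are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 5000
data = (rng.random((T, N)) < 0.2).astype(float)          # toy binary spike raster (T words, N neurons)

def sample_model(h, J, n_samples=2000, n_sweeps=10):
    """Metropolis sampling of P(s) proportional to exp(h.s + s.J.s) over binary words s."""
    s = (rng.random(N) < 0.5).astype(float)
    out = np.empty((n_samples, N))
    for t in range(n_samples):
        for _ in range(n_sweeps):
            i = rng.integers(N)
            dE = (1 - 2 * s[i]) * (h[i] + 2 * (J[i] @ s - J[i, i] * s[i]))
            if np.log(rng.random()) < dE:                 # accept the flip with prob min(1, exp(dE))
                s[i] = 1 - s[i]
        out[t] = s
    return out

h, J = np.zeros(N), np.zeros((N, N))
target_m, target_C = data.mean(0), data.T @ data / T
for step in range(50):                                    # gradient ascent on the log-likelihood
    samp = sample_model(h, J)
    h += 0.5 * (target_m - samp.mean(0))
    J += 0.5 * (target_C - samp.T @ samp / len(samp))
    np.fill_diagonal(J, 0.0)
print(round(float(np.abs(target_m - samp.mean(0)).max()), 3))  # firing rates should now match closely
```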

  1. Mathematical methods in material science and large scale optimization workshops: Final report, June 1, 1995-November 30, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, A. [Minnesota Univ., Minneapolis, MN (United States). Inst. for Mathematics and Its Applications

    1996-12-01

    The summer program in Large Scale Optimization concentrated largely on process engineering, aerospace engineering, inverse problems and optimal design, and molecular structure and protein folding. The program brought together application people, optimizers, and mathematicians with interest in learning about these topics. Three proceedings volumes are being prepared. The year in Materials Sciences deals with disordered media and percolation, phase transformations, composite materials, microstructure; topological and geometric methods as well as statistical mechanics approach to polymers (included were Monte Carlo simulation for polymers); miscellaneous other topics such as nonlinear optical material, particulate flow, and thin film. All these activities saw strong interaction among material scientists, mathematicians, physicists, and engineers. About 8 proceedings volumes are being prepared.

  2. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connections to other nodes. Based on partial correlation, we find that the most significant
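    Whatever estimator produces the precision matrix (CLIME in the paper, a graphical lasso, or simply the inverse sample covariance in low dimensions), the step from precision matrix to partial correlations is the same sign-flipped normalization, rho_ij = -P_ij / sqrt(P_ii P_jj). A sketch with a toy three-node chain, where the 0-2 connection is only indirect, is given below.

```python
import numpy as np

def partial_correlation(precision):
    """Convert a precision (inverse covariance) matrix into partial correlations:
    rho_ij = -P_ij / sqrt(P_ii * P_jj)."""
    d = np.sqrt(np.diag(precision))
    rho = -precision / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# toy node time series (e.g. mean BOLD signals of brain regions): 0-1 and 1-2 connected directly
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 6))
ts[:, 1] += 0.7 * ts[:, 0]
ts[:, 2] += 0.7 * ts[:, 1]

P = np.linalg.inv(np.cov(ts, rowvar=False))    # low-dimensional stand-in for the CLIME estimate
print(np.round(partial_correlation(P), 2))     # the 0-2 entry should be near zero, unlike the full correlation
```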

  3. Large scale GW calculations

    International Nuclear Information System (INIS)

    Govoni, Marco; Argonne National Lab., Argonne, IL; Galli, Giulia; Argonne National Lab., Argonne, IL

    2015-01-01

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons

  4. Advances in a framework to compare bio-dosimetry methods for triage in large-scale radiation events

    International Nuclear Information System (INIS)

    Flood, Ann Barry; Boyle, Holly K.; Du, Gaixin; Demidenko, Eugene; Williams, Benjamin B.; Swartz, Harold M.; Nicolalde, Roberto J.

    2014-01-01

    Planning and preparation for a large-scale nuclear event would be advanced by assessing the applicability of potentially available bio-dosimetry methods. Using an updated comparative framework the performance of six bio-dosimetry methods was compared for five different population sizes (100-1 000 000) and two rates for initiating processing of the marker (15 or 15 000 people per hour) with four additional time windows. These updated factors are extrinsic to the bio-dosimetry methods themselves but have direct effects on each method's ability to begin processing individuals and the size of the population that can be accommodated. The results indicate that increased population size, along with severely compromised infrastructure, increases the time needed to triage, which decreases the usefulness of many time intensive dosimetry methods. This framework and model for evaluating bio-dosimetry provides important information for policy-makers and response planners to facilitate evaluation of each method and should advance coordination of these methods into effective triage plans. (authors)

  5. A Simple but Powerful Heuristic Method for Accelerating k-Means Clustering of Large-Scale Data in Life Science.

    Science.gov (United States)

    Ichikawa, Kazuki; Morishita, Shinichi

    2014-01-01

    K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
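    The equivalence the abstract relies on follows because, after row-wise z-scoring, the squared Euclidean distance between two d-dimensional profiles equals 2d(1 - r), with r the Pearson correlation; an ordinary Euclidean k-means on z-scored rows therefore clusters by Pearson correlation distance. The sketch below illustrates just this equivalence on made-up data; the pruning heuristic of the paper is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def zscore_rows(X):
    """Row-wise z-scoring: afterwards, the squared Euclidean distance between two
    d-dimensional rows is 2*d*(1 - Pearson correlation)."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# made-up expression-like profiles drawn from three correlated templates
rng = np.random.default_rng(0)
base = rng.standard_normal((3, 50))
labels_true = rng.integers(0, 3, 600)
X = base[labels_true] * rng.uniform(0.5, 2.0, size=(600, 1)) + rng.normal(0, 0.3, (600, 50))

# plain Euclidean k-means on z-scored rows == k-means under Pearson correlation distance
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(zscore_rows(X))
print(np.bincount(km.labels_[labels_true == 0], minlength=3))   # one cluster should dominate
```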

  6. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  7. Gravitation on large scales

    Science.gov (United States)

    Giraud, E.

    A sample of dwarf and spiral galaxies with extended rotation curves is analysed, assuming that the fraction of dark matter is small. The objective of the paper is to prepare a framework for a theory, based on fundamental principles, that would give fits of the same quality as the phenomenology of dark halos. The following results are obtained: 1) The geodesics of massive systems with low density (Class I galaxies) can be described by the metric $ds^2 = b^{-1}(r)\,dr^2 - b(r)\,dt^2 + r^2\,d\Omega^2$, where $b(r) = 1 - \frac{2}{c^2}\left(\frac{GM}{r} + \Gamma_f M^{1/2}\right)$. In this expression $\Gamma_f$ is a new fundamental constant, deduced from rotation curves of galaxies with circular velocity $V_c^2 \ge \frac{2GM}{r}$ for all $r$. 2) The above metric is deduced from the conformally invariant metric $ds^2 = B^{-1}(r)\,dr^2 - B(r)\,dt^2 + r^2\,d\Omega^2$, where $B(r) = 1 - \frac{2}{c^2}\left(\frac{GM}{r} + \Gamma_f M^{1/2} + \frac{1}{3}\frac{\Gamma_f^2}{G}\,r\right)$, through a linear transform, $u$, of the special linear group $SL(2,\mathbb{R})$. 3) The term $\frac{2}{c^2}\Gamma_f M^{1/2}$ accounts for the difference between the observed rotation velocity and the Newtonian velocity. The term $\frac{2}{3c^2}\frac{\Gamma_f^2}{G}\,r$ is interpreted as a scale invariance between systems of different masses and sizes. 4) The metric $B$ is a vacuum solution around a mass $M$ deduced from the least-action principle applied to the unique action $I_a = -2a \int (-g)^{1/2}\left[R_{\mu\kappa}R^{\mu\kappa} - \tfrac{1}{3}\,(R^{\alpha}{}_{\alpha})^2\right] d^4x$ built with the conformal Weyl tensor. 5) For galaxies such that there is a radius, $r_0$, at which $\frac{GM}{r_0} = \Gamma_f M^{1/2}$ (Class II), the term $\Gamma_f M^{1/2}$ might be confined by the Newtonian potential, yielding stationary solutions. 6) The analysed rotation curves of Class II galaxies are indeed well described by metrics of the form $b(r) = 1 - \frac{2}{c^2}\left(\frac{GM}{r} + (n+1)\,\Gamma_0 M^{1/2}\right)$, where $n$ is an integer and $\Gamma_0 = \frac{1}{\sqrt{3}}\,\Gamma_f$. 7) The effective potential is determined and

  8. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series

  9. Methods for assessing the socioeconomic impacts of large-scale resource developments: implications for nuclear repository siting

    International Nuclear Information System (INIS)

    Murdock, S.H.; Leistritz, F.L.

    1983-03-01

    An overview is provided of the major methods presently available for assessing the socioeconomic impacts of large-scale resource developments, including discussion of the implications and applications of such methods for nuclear-waste-repository siting. The report: (1) summarizes conceptual approaches underlying, and methodological alternatives for, the conduct of impact assessments in each substantive area, and then enumerates advantages and disadvantages of each alternative; (2) describes factors related to the impact-assessment process, impact events, and the characteristics of rural areas that affect the magnitude and distribution of impacts and the assessment of impacts in each area; (3) provides a detailed review of those methodologies actually used in impact assessment for each area, describes advantages and problems encountered in the use of each method, and identifies the frequency of use and the general level of acceptance of each technique; and (4) summarizes the implications of each area of projection for the repository-siting process and the applicability of the methods for each area to the special and standard features of repositories, and makes general recommendations concerning specific methods and procedures that should be incorporated in assessments for siting areas

  10. Large-Scale Consumption and Zero-Waste Recycling Method of Red Mud in Steel Making Process

    Directory of Open Access Journals (Sweden)

    Guoshan Ning

    2018-03-01

    Full Text Available To relieve the environmental pressure from the massive discharge of bauxite residue (red mud), a novel method for recycling red mud in the steel making process was investigated through high-temperature experiments and thermodynamic analysis. The results showed that after reduction roasting of the carbon-bearing red mud pellets at 1100–1200 °C for 12–20 min, metallic pellets were obtained with a metallization ratio of ≥88%. Separation of slag and iron was then achieved from the metallic pellets at 1550 °C, after composition adjustment targeting the primary crystal region of the 12CaO·7Al2O3 phase. After iron removal and composition adjustment, the smelting-separation slag had good smelting performance and desulfurization capability, meeting the requirements of a desulfurization flux for the steel making process. The pig iron quality meets the requirements for a high-quality raw material for steel making. By virtue of the huge scale and output of the steel industry, a large-scale consumption and zero-waste recycling route for red mud is proposed, comprising roasting of the carbon-bearing red mud pellets in a rotary hearth furnace and smelting separation in an electric arc furnace after composition adjustment.

  11. A new method for large-scale assessment of change in ecosystem functioning in relation to land degradation

    Science.gov (United States)

    Horion, Stephanie; Ivits, Eva; Verzandvoort, Simone; Fensholt, Rasmus

    2017-04-01

    Ongoing pressures on European land are manifold, with extreme climate events and non-sustainable use of land resources being amongst the most important drivers altering the functioning of ecosystems. The protection and conservation of European natural capital is one of the key objectives of the 7th Environmental Action Plan (EAP). The EAP stipulates that European land must be managed in a sustainable way by 2020, and the UN Sustainable Development Goals define a Land Degradation Neutral world as one of their targets. This implies that land degradation (LD) assessment of European ecosystems must be performed repeatedly, allowing for the assessment of the current state of LD as well as changes compared to a baseline adopted by the UNCCD for the objective of land degradation neutrality. However, scientifically robust methods are still lacking for large-scale assessment of LD and for repeated, consistent mapping of the state of terrestrial ecosystems. Historical land degradation assessments based on various methods exist, but the methods are generally non-replicable or difficult to apply at continental scale (Allan et al. 2007). The current lack of research methods applicable at large spatial scales is notably caused by the non-robust definition of LD, the scarcity of field data on LD, as well as the complex interplay of the processes driving LD (Vogt et al., 2011). Moreover, the link between LD and changes in land use (how land use changes relate to changes in vegetation productivity and ecosystem functioning) is not straightforward. In this study we used the segmented trend method developed by Horion et al. (2016) for large-scale systematic assessment of hotspots of change in ecosystem functioning in relation to LD. This method alleviates shortcomings of the widely used linear trend model, which neither accounts for abrupt change nor adequately captures the actual changes in ecosystem functioning (de Jong et al. 2013; Horion et al. 2016). Here we present a new methodology for

  12. Comparison of zero-sequence injection methods in cascaded H-bridge multilevel converters for large-scale photovoltaic integration

    DEFF Research Database (Denmark)

    Yu, Yifan; Konstantinou, Georgios; Townsend, Christopher David

    2017-01-01

    Photovoltaic (PV) power generation levels in the three phases of a multilevel cascaded H-bridge (CHB) converter can be significantly unbalanced, owing to different irradiance levels and ambient temperatures over a large-scale solar PV power plant. Injection of a zero-sequence voltage is required to maintain three-phase balanced grid currents with unbalanced power generation. This study theoretically compares the power balance capabilities of various zero-sequence injection methods based on two metrics which can be easily generalised for all CHB applications to PV systems. Experimental results based on a 430 V, 10 kW, three-phase, seven-level cascaded H-bridge converter prototype confirm the superior performance of the optimal zero-sequence injection technique....

  13. Large-scale structure of the Taurus molecular complex. II. Analysis of velocity fluctuations and turbulence. III. Methods for turbulence

    International Nuclear Information System (INIS)

    Kleiner, S.C.; Dickman, R.L.

    1985-01-01

    The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large scale velocity structure of the Taurus dark cloud complex traced by the present 13CO J = 1-0 observations of this region. The results obtained are discussed in the context of recent suggestions that widely observed correlations between molecular cloud line widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, involving the mapping of a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map. 61 references
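    The underlying measurement, the autocorrelation of a 2-D map of line-centroid velocities as a function of spatial lag, is simple to sketch. The map below is a synthetic, smoothed random field standing in for an observed centroid map; only the lag-averaged ACF computation is illustrated.

```python
import numpy as np

def centroid_velocity_acf(vmap, max_lag):
    """Normalized autocorrelation of a 2-D map of line-centroid velocities as a
    function of spatial lag (in pixels), averaged over the x and y directions."""
    v = vmap - vmap.mean()
    var = (v ** 2).mean()
    acf = []
    for lag in range(max_lag + 1):
        prods = np.concatenate([
            (v[:, lag:] * v[:, :v.shape[1] - lag]).ravel(),   # x-direction pairs
            (v[lag:, :] * v[:v.shape[0] - lag, :]).ravel(),   # y-direction pairs
        ])
        acf.append(prods.mean() / var)
    return np.array(acf)

# synthetic centroid map: white noise smoothed to mimic correlated large-scale motions
rng = np.random.default_rng(0)
noise = rng.standard_normal((128, 128))
kernel = np.exp(-np.arange(-10, 11) ** 2 / (2 * 4.0 ** 2))
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, noise)
smooth = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, smooth)
print(np.round(centroid_velocity_acf(smooth, 12), 2))     # decays over roughly the smoothing scale
```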

  14. Predictability of the recent slowdown and subsequent recovery of large-scale surface warming using statistical methods

    Science.gov (United States)

    Mann, Michael E.; Steinman, Byron A.; Miller, Sonya K.; Frankcombe, Leela M.; England, Matthew H.; Cheung, Anson H.

    2016-04-01

    The temporary slowdown in large-scale surface warming during the early 2000s has been attributed to both external and internal sources of climate variability. Using semiempirical estimates of the internal low-frequency variability component in Northern Hemisphere, Atlantic, and Pacific surface temperatures in concert with statistical hindcast experiments, we investigate whether the slowdown and its recent recovery were predictable. We conclude that the internal variability of the North Pacific, which played a critical role in the slowdown, does not appear to have been predictable using statistical forecast methods. An additional minor contribution from the North Atlantic, by contrast, appears to exhibit some predictability. While our analyses focus on combining semiempirical estimates of internal climatic variability with statistical hindcast experiments, possible implications for initialized model predictions are also discussed.

  15. Surface-roughness-assisted formation of large-scale vertically aligned CdS nanorod arrays via solvothermal method

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Minmin [National Laboratory of Solid State Microstructures, School of Electronic Science and Engineering, Nanjing University, Nanjing 210093 (China); Yan, Shancheng, E-mail: yansc@njupt.edu.cn [National Laboratory of Solid State Microstructures, School of Electronic Science and Engineering, Nanjing University, Nanjing 210093 (China); School of Geography and Biological Information, Nanjing University of Posts and Telecommunications, Nanjing 210046 (China); Shi, Yi, E-mail: yshi@nju.edu.cn [National Laboratory of Solid State Microstructures, School of Electronic Science and Engineering, Nanjing University, Nanjing 210093 (China); Yang, Meng; Sun, Huabin; Wang, Jianyu; Yin, Yao; Gao, Fan [National Laboratory of Solid State Microstructures, School of Electronic Science and Engineering, Nanjing University, Nanjing 210093 (China)

    2013-05-15

    Large-scale cadmium sulfide (CdS) nanorod arrays were successfully synthesized on several different substrates through a solvothermal reaction. During the growth experiments, we observed that the adhesion strength of the CdS nanorod arrays to the different substrates differed dramatically, causing some of the CdS coating to be easily flushed away by deionized water (DI water). Prompted by this observation, we carefully investigated the original morphology of all the substrates using atomic force microscopy (AFM). The phase, morphology, crystal structure and photoelectric properties of all the products were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman spectroscopy and a current–voltage (I–V) probe station. A growth mechanism for the solvothermal reaction is proposed on the basis of these characterizations. Our approach presents a universal method for liquid-phase epitaxy of 1D materials on a wide range of substrates of any shape.

  16. Fabrication of Fe-6.5wt%Si Ribbons by Melt Spinning Method on Large Scale

    Directory of Open Access Journals (Sweden)

    Y. F. Liang

    2015-01-01

    Full Text Available The melt spinning method has been widely applied in industry for the fabrication of Fe-based amorphous/nanocrystalline ribbons. Compared with Fe-based amorphous/nanocrystalline alloys, Fe-6.5wt%Si high silicon steel is of low cost and has comparably excellent soft magnetic properties. Due to its higher melting point and the absence of a supercooled liquid region, fabrication of Fe-6.5wt%Si ribbons is difficult and has so far been limited to the lab scale. In this paper, we report the successful large-scale fabrication of Fe-6.5wt%Si ribbons; the microstructures, ordered structures, and mechanical and soft magnetic properties of the ribbons were investigated. Due to the rapid solidification rate, the ribbons had ultrafine grains and a low degree of order, and exhibited a degree of bending and tensile ductility. After heat treatment, excellent soft magnetic properties were obtained. Due to near-zero magnetostriction, the ribbons are promising for use in high-frequency electric devices where low noise is required.

  17. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  18. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  19. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields, in sd-shell nuclei, results of similar quality to the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)

  20. Non-smooth optimization methods for large-scale problems: applications to mid-term power generation planning

    International Nuclear Information System (INIS)

    Emiel, G.

    2008-01-01

    This manuscript deals with large-scale non-smooth optimization that may typically arise when performing Lagrangian relaxation of difficult problems. This technique is commonly used to tackle mixed-integer linear programs or large-scale convex problems. For example, a classical approach when dealing with power generation planning problems in a stochastic environment is to perform a Lagrangian relaxation of the coupling constraints of demand. In this approach, a master problem coordinates local subproblems, specific to each generation unit. The master problem deals with a separable non-smooth dual function which can be maximized with, for example, bundle algorithms. In chapter 2, we introduce basic tools of non-smooth analysis and some recent results regarding incremental or inexact instances of non-smooth algorithms. However, in some situations, the dual problem may still be very hard to solve. For instance, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization may no longer be possible or the update of dual variables may fail. In order to reduce the dual dimension, different heuristics were proposed. They involve a separation procedure to dynamically select a restricted set of constraints to be dualized along the iterations. This relax-and-cut type approach has shown its numerical efficiency in many combinatorial problems. In chapter 3, we show primal-dual convergence of such a strategy when using an adapted subgradient method for the dual step, under minimal assumptions on the separation procedure. Another limit of Lagrangian relaxation may appear when the dual function is separable into highly numerous or complex sub-functions. In such situations, the computational burden of solving all local subproblems may dominate the whole iterative process. A natural strategy here is to take full advantage of the separable dual structure, performing a dual iteration after having
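    The dual loop these chapters build on can be sketched with a toy version of the power-planning example: relax the coupling demand constraint, let each unit solve its local subproblem for the current multiplier, and take a subgradient step on the dual. The unit costs, capacities, and demand below are invented, and a bundle method would replace the plain diminishing-step ascent with a stabilized cutting-plane model.

```python
import numpy as np

# invented units: cost a_u * p^2, capacity pmax_u; coupling constraint sum(p) = D (demand)
a = np.array([0.10, 0.05, 0.20])
pmax = np.array([40.0, 60.0, 30.0])
D = 90.0

def unit_subproblems(lam):
    """Each unit independently minimizes a_u*p^2 - lam*p over [0, pmax_u] (the local subproblem)."""
    p = np.clip(lam / (2 * a), 0.0, pmax)
    return p, float((a * p ** 2 - lam * p).sum())

lam = 0.0
for k in range(1, 200):
    p, local_value = unit_subproblems(lam)
    dual_value = local_value + lam * D      # value of the non-smooth dual function at lam
    g = D - p.sum()                         # a subgradient of the dual function at lam
    lam += (1.0 / k) * g                    # diminishing-step subgradient ascent
print(round(float(lam), 3), np.round(p, 1), round(float(p.sum()), 1))   # price and near-feasible dispatch
```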

  1. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h^-1 Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale 120 h^-1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  2. A Novel Method for Proximity Detection of Moving Targets Using a Large-Scale Planar Capacitive Sensor System

    Directory of Open Access Journals (Sweden)

    Yong Ye

    2016-05-01

    Full Text Available A novel method for proximity detection of moving targets (with high dielectric constants) using a large-scale planar capacitive sensor system (PCSS; each sensor is 31 cm × 19 cm) is proposed. The capacitive variation with distance is derived, and a pair of electrodes in a planar capacitive sensor unit (PCSU) with a spiral shape is found to have better performance in terms of sensitivity distribution homogeneity and dynamic range than three other shapes (comb shape, rectangular shape, and circular shape). A driving excitation circuit with a Clapp oscillator is proposed, and a capacitance measuring circuit with a sensitivity of 0.21 Vp-p/pF is designed. The results of static and dynamic experiments demonstrate that the voltage curves of the static experiments are similar to those of the dynamic experiments; therefore, the static data can be used to simulate the dynamic curves. The dynamic range of proximity detection for three projectiles is up to 60 cm, and the results of the subsequent static experiments show that the PCSU with four neighboring units has the highest sensitivity (the sensitivities of the other units are at least 4% lower); when the attack angle decreases, the intensity of the sensor signal increases. The proposed method leads to the design of a feasible moving-target detector with simple structure and low cost, which can be applied in interception systems.

  3. Large-scale uniform ZnO tetrapods on catalyst free glass substrate by thermal evaporation method

    Energy Technology Data Exchange (ETDEWEB)

    Alsultany, Forat H., E-mail: foratusm@gmail.com [School of Physics, USM, 11800 Penang (Malaysia); Hassan, Z. [Institute of Nano-Optoelectronics Research and Technology Laboratory (INOR), USM, 11800 Penang (Malaysia); Ahmed, Naser M. [School of Physics, USM, 11800 Penang (Malaysia)

    2016-07-15

    Highlights: • The growth of ZnO-Ts on a glass substrate by the thermal evaporation method is investigated. • The glass substrate requires no catalyst or seed layer. • The morphology was controlled by adjusting the temperatures of the material and the substrate. • The glass substrate was placed vertically in the quartz tube. - Abstract: Here, we report for the first time the catalyst-free growth of large-scale ZnO tetrapods of uniform shape and size on a glass substrate via the thermal evaporation method. Three-dimensional networks of ZnO tetrapods have needle–wire junctions, an average leg length of 2.1–2.6 μm, and a diameter of 35–240 nm. The morphology and structure of the ZnO tetrapods were investigated by controlling the preparation temperatures of the Zn powder and the glass substrate under O2 and Ar gases. The ZnO tetrapods were studied using X-ray diffraction, field emission scanning electron microscopy, UV–vis spectrophotometry, and photoluminescence spectroscopy. The results showed that the samples grow in the hexagonal wurtzite structure, preferentially oriented along the (002) direction, with good crystallinity and high transmittance. The band gap value is about 3.27 eV. The photoluminescence spectrum exhibits a very sharp peak at 378 nm and a weak, broad green emission.

  4. A simple method to fabricate an NIR detector by PbTe nanowires in a large scale

    International Nuclear Information System (INIS)

    Baghchesara, Mohammad Amin; Yousefi, Ramin; Cheraghizade, Mohsen; Jamali-Sheini, Farid; Saáedi, Abdolhossein; Mahmmoudian, M.R.

    2016-01-01

    Highlights: • PbTe nanowires were grown by tellurization of Pb sheets for the first time. • A band gap in the NIR region was observed for the PbTe nanostructures. • An NIR detector was fabricated on a large scale using a simple method. • The effect of Te concentration on the morphology of the PbTe nanostructures was investigated. - Abstract: A simple method was used to fabricate a near-infrared (NIR) detector using PbTe nanostructures. Samples were synthesized by tellurization of lead sheets in a tube furnace. PbTe nanostructures with wire and flake shapes were grown on the lead sheets placed at 300 and 330 °C, respectively, while PbTe nanoporous structures were grown at 360 and 390 °C. X-ray diffraction patterns and X-ray photoelectron spectra indicated that the PbTe phase was formed in all samples. UV–vis diffuse reflectance measurements showed a band gap for the PbTe nanostructures in the near-infrared region of the electromagnetic spectrum; the band gap values of the PbTe nanowires and nanoporous structures were 1.54 eV and 1.61 eV, respectively. Finally, the PbTe nanostructures were used in a simple photoresponse device under a red light source. The photoresponse results revealed that PbTe nanowires are promising for photoelectrical applications in the NIR region.

  5. Dynamic subgrid scale model used in a deep bundle turbulence prediction using the large eddy simulation method

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    1996-01-01

    Turbulence is one of the most commonly occurring phenomena of engineering interest in the field of fluid mechanics. Since most flows are turbulent, there is a significant payoff for improved predictive models of turbulence. One area of concern is the turbulent buffeting forces experienced by the tubes in steam generators of nuclear power plants. Although the Navier-Stokes equations are able to describe turbulent flow fields, the large number of scales of turbulence limits practical flow field calculations with current computing power. The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is that it requires no input model coefficient: the model coefficient is evaluated dynamically at each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (Smagorinsky, 1963), which is used as the base model for the dynamic subgrid scale model, and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with the experimental data. Satisfactory turbulence characteristics are observed through flow visualization

  6. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  7. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  8. Participatory Bluetooth Sensing: A Method for Acquiring Spatio-Temporal Data about Participant Mobility and Interactions at Large Scale Events

    DEFF Research Database (Denmark)

    Stopczynski, Arkadiusz; Larsen, Jakob Eg; Jørgensen, Sune Lehmann

    2013-01-01

    for collecting spatio-temporal data about participant mobility and social interactions uses the capabilities of Bluetooth capable smartphones carried by participants. As a proof-of-concept we present a field study with deployment of the method in a large music festival with 130 000 participants where a small...

  9. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters of scale 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community

  10. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    Science.gov (United States)

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Abstract Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data are model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data-analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
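    The cyclic-loess idea, regress the log-intensity difference M of two runs on their average A with a loess smoother and remove half of the fitted trend from each run, cycling over all pairs of runs, can be sketched for a single pair with statsmodels. The intensities and bias below are simulated, and this two-run step is only the inner building block, not the full multi-run, pooled-experiment pipeline evaluated in the paper.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def cyclic_loess_pair(x, y, frac=0.4):
    """One cyclic-loess step for two runs of log-intensities: fit M = x - y against
    A = (x + y)/2 with a loess smoother and split the fitted bias between the runs."""
    M, A = x - y, (x + y) / 2.0
    trend = lowess(M, A, frac=frac, return_sorted=False)   # fitted M-vs-A trend, original order
    return x - trend / 2.0, y + trend / 2.0

# simulated log-intensities: run y carries an intensity-dependent bias relative to run x
rng = np.random.default_rng(0)
x = rng.normal(15, 2, 2000)
y = x + 0.1 * (x - 15) + 0.3 + rng.normal(0, 0.1, 2000)
xn, yn = cyclic_loess_pair(x, y)
print(round(float(np.mean(np.abs(x - y))), 3),      # systematic difference before...
      round(float(np.mean(np.abs(xn - yn))), 3))    # ...and after normalization
```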

  11. A Decomposition-Based Pricing Method for Solving a Large-Scale MILP Model for an Integrated Fishery

    Directory of Open Access Journals (Sweden)

    M. Babul Hasan

    2007-01-01

    The integrated fishery problem (IFP) can be decomposed into a trawler-scheduling subproblem and a fish-processing subproblem in two different ways by relaxing different sets of constraints. We tried conventional decomposition techniques including subgradient optimization and Dantzig-Wolfe decomposition, both of which were unacceptably slow. We then developed a decomposition-based pricing method for solving the large fishery model, which gives excellent computation times. Numerical results for several planning horizon models are presented.

  12. An easy, low-cost method to transfer large-scale graphene onto polyethylene terephthalate as a transparent conductive flexible substrate

    International Nuclear Information System (INIS)

    Chen, Chih-Sheng; Hsieh, Chien-Kuo

    2014-01-01

    In this study, we develop a low-cost method for transferring a large-scale graphene film onto a flexible transparent substrate. An easily accessible method for home-made chemical vapor deposition (CVD) and a commercial photograph laminator were utilized to fabricate the low-cost graphene-based transparent conductive flexible substrate. The graphene was developed based on CVD growth on nickel foil using a carbon gas source, and the graphene thin film was easily transferred onto the laminating film via a heated photograph laminator. Field emission scanning electron microscopy and atomic force microscopy were utilized to examine the morphological characteristics of the graphene surface. Raman spectroscopy and transmission electron microscopy were utilized to examine the microstructure of the graphene. The optical–electronic properties of the transferred graphene flexible thin film were measured by ultraviolet–visible spectrometry and a four-point probe. The advantage of this method is that large-scale graphene-based thin films can be easily obtained. We provide an economical method for fabricating a graphene-based transparent conductive flexible substrate. - Highlight: • We synthesized the large-scale graphene by thermal CVD method. • A low-cost commercial photograph laminator was used to transfer graphene. • A large-scale transparent and flexible graphene substrate was obtained easily

  13. An easy, low-cost method to transfer large-scale graphene onto polyethylene terephthalate as a transparent conductive flexible substrate

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chih-Sheng; Hsieh, Chien-Kuo, E-mail: jack_hsieh@mail.mcut.edu.tw

    2014-11-03

    In this study, we develop a low-cost method for transferring a large-scale graphene film onto a flexible transparent substrate. An easily accessible method for home-made chemical vapor deposition (CVD) and a commercial photograph laminator were utilized to fabricate the low-cost graphene-based transparent conductive flexible substrate. The graphene was developed based on CVD growth on nickel foil using a carbon gas source, and the graphene thin film was easily transferred onto the laminating film via a heated photograph laminator. Field emission scanning electron microscopy and atomic force microscopy were utilized to examine the morphological characteristics of the graphene surface. Raman spectroscopy and transmission electron microscopy were utilized to examine the microstructure of the graphene. The optical–electronic properties of the transferred graphene flexible thin film were measured by ultraviolet–visible spectrometry and a four-point probe. The advantage of this method is that large-scale graphene-based thin films can be easily obtained. We provide an economical method for fabricating a graphene-based transparent conductive flexible substrate. - Highlight: • We synthesized the large-scale graphene by thermal CVD method. • A low-cost commercial photograph laminator was used to transfer graphene. • A large-scale transparent and flexible graphene substrate was obtained easily.

  14. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to address the complex coupled phenomena to the required level of detail.

  15. Large-scale calculations of solid oxide fuel cell cermet anode by tight-binding quantum chemistry method

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Kubo, Momoji; Miyamoto, Akira

    2005-01-01

    Improvement of anode characteristics of solid oxide fuel cells is important for better cell performance and especially for the direct use of hydrocarbons. A mixture of ceramics and metal is generally used as the anode, and different combinations of ceramics and metals lead to different electrode characteristics. We performed large-scale calculations to investigate the characteristics of Ni/CeO2 and Cu/CeO2 anodes at the electronic level using our tight-binding quantum chemical molecular dynamics program. Charge distribution analysis clarified the electron transfer from metal to oxide in both anodes. The calculations of density of states clarified the different contributions of Ni and Cu orbitals to the energy levels around the Fermi level in each cermet. Based on the obtained results, we offer an explanation for the different characteristics of the two cermet anodes. The effectiveness of our approach for the investigation of complex cermet systems was demonstrated.

  16. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  17. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  18. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme the results of the large scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; little evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie within a narrow range of two to three orders of magnitude. Test design did not allow fracture zones to be tested individually. This could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could either be because there is no hydraulic connection, or because there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  19. Prospective and Retrospective Studies of Substance Abuse Treatment Outcomes: Methods and Results of Four Large-Scale Follow-Up Studies.

    Science.gov (United States)

    Gerstein, Dean R.; Johnson, Robert A.

    This report compares the research methods, provider and patient characteristics, and outcome results from four large-scale followup studies of drug treatment during the 1990s: (1) the California Drug and Alcohol Treatment Assessment (CALDATA); (2) Services Research Outcomes Study (SROS); (3) National Treatment Improvement Evaluation Study (NTIES);…

  20. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  1. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  2. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  3. Large-scale systematic analysis of 2D fingerprint methods and parameters to improve virtual screening enrichments.

    Science.gov (United States)

    Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody

    2010-05-24

    A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
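
    The core operations behind such a screen, scoring a library against a query fingerprint and reporting an enrichment factor, can be sketched in a few lines. The example below uses random numpy bit vectors and invented activity labels purely for illustration; it is not the Canvas package or its fingerprints.

```python
# Toy virtual-screening scorer: Tanimoto similarity on binary fingerprints
# plus an enrichment factor, illustrating the kind of metric compared in the
# study.  Fingerprints here are random numpy bit vectors, not Canvas output.
import numpy as np

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two boolean fingerprint vectors."""
    both = np.logical_and(fp_a, fp_b).sum()
    either = np.logical_or(fp_a, fp_b).sum()
    return both / either if either else 0.0

def enrichment_factor(scores, is_active, fraction=0.01):
    """Enrichment factor at the given fraction of the ranked database."""
    order = np.argsort(scores)[::-1]
    n_top = max(1, int(len(scores) * fraction))
    hits_top = is_active[order[:n_top]].sum()
    return (hits_top / n_top) / is_active.mean()

rng = np.random.default_rng(1)
query = rng.random(1024) < 0.1                     # 1024-bit query fingerprint
library = rng.random((5000, 1024)) < 0.1           # screening library
actives = rng.random(5000) < 0.02                  # hypothetical activity labels
scores = np.array([tanimoto(query, fp) for fp in library])
print("EF@1%:", enrichment_factor(scores, actives, 0.01))
```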

  4. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 mu m. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  5. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  6. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  7. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the fault-free operation of large scale integration (LSI) and very large scale integration (VLSI) circuits. The article presents a comparative analysis of the factors that determine the reliability of integrated circuits, a review of existing methods, and a model for evaluating the fault-free operation of LSI and VLSI circuits. The main part describes a proposed algorithm and program for analyzing the fault rate in LSI and VLSI circuits.

  8. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  9. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    Science.gov (United States)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640 and the linear scaling memory and CPU requirements of the CEM demonstrated. We show that the CPU requisites of the CEM and CG-DMS are similar for calculations with comparable accuracy.
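
    A schematic of the CEM idea, approximating a function of the Hamiltonian by a Chebyshev series of matrix polynomials instead of diagonalizing, is sketched below for a toy tight-binding chain. The chain, the Fermi-type occupation function and the expansion order are assumptions made for illustration; the sketch works with dense matrices, so it does not reproduce the linear-scaling implementation of the paper.

```python
# Sketch: approximate a smooth function of a Hamiltonian with a Chebyshev
# series and compare against exact diagonalization.  The tight-binding chain
# and the Fermi broadening are illustrative choices, not the paper's system.
import numpy as np

def chebyshev_matrix_function(H, f, order=200):
    """Approximate f(H) by a Chebyshev expansion (spectrum mapped to [-1, 1])."""
    emin, emax = np.linalg.eigvalsh(H)[[0, -1]]       # spectral bounds; estimates suffice
    a, b = (emax - emin) / 2 * 1.01, (emax + emin) / 2
    Hs = (H - b * np.eye(len(H))) / a                 # rescaled Hamiltonian
    # Chebyshev coefficients of the rescaled scalar function (Gauss-Chebyshev nodes).
    j = np.arange(order) + 0.5
    x = np.cos(np.pi * j / order)
    c = 2.0 / order * np.cos(np.pi * np.outer(np.arange(order), j / order)) @ f(a * x + b)
    c[0] *= 0.5
    # Matrix recursion: T0 = I, T1 = Hs, T_{k+1} = 2 Hs T_k - T_{k-1}.
    T_prev, T_curr = np.eye(len(H)), Hs
    result = c[0] * T_prev + c[1] * T_curr
    for k in range(2, order):
        T_prev, T_curr = T_curr, 2 * Hs @ T_curr - T_prev
        result += c[k] * T_curr
    return result

n = 100
H = -np.eye(n, k=1) - np.eye(n, k=-1)                 # toy tight-binding chain
fermi = lambda e: 1.0 / (1.0 + np.exp(e / 0.1))       # smoothed occupation at mu = 0
approx = chebyshev_matrix_function(H, fermi)
w, v = np.linalg.eigh(H)
exact = (v * fermi(w)) @ v.T
print("max deviation from diagonalization:", np.abs(approx - exact).max())
```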

  10. Large Scale Screening of Low Cost Ferritic Steel Designs For Advanced Ultra Supercritical Boiler Using First Principles Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, Lizhi [Tennessee State Univ. Nashville, TN (United States)

    2016-11-29

    Advanced Ultra Supercritical Boiler (AUSC) requires materials that can operate in a corrosive environment at temperatures and pressures as high as 760°C (1400°F) and 5000 psi, respectively, while maintaining good ductility at low temperature. We developed automated simulation software tools to enable fast, large-scale screening studies of candidate designs. While direct evaluation of creep rupture strength and ductility is currently not feasible, properties such as energy, elastic constants, surface energy, interface energy, and stacking fault energy can be used to assess relative ductility and creep strength. We implemented software to automate the complex calculations and minimize human input in the tedious screening studies, which involve model structure generation, settings for first-principles calculations, results analysis and reporting. The software developed in the project, together with the library of computed mechanical properties of phases found in ferritic steels (many of them complex solid solutions estimated for the first time), will certainly help the development of low-cost ferritic steels for AUSC.

  11. Large Scale Screening of Southern African Plant Extracts for the Green Synthesis of Gold Nanoparticles Using Microtitre-Plate Method

    Directory of Open Access Journals (Sweden)

    Abdulrahman M. Elbagory

    2016-11-01

    Full Text Available The preparation of gold nanoparticles (AuNPs) involves a variety of chemical and physical methods. These methods use toxic and environmentally harmful chemicals. Consequently, the synthesis of AuNPs using green chemistry has been under investigation to develop eco-friendly nanoparticles. One approach to achieve this is the use of plant-derived phytochemicals that are capable of reducing gold ions to produce AuNPs. The aim of this study was to implement a facile microtitre-plate method to screen a large number of aqueous plant extracts to determine the optimum concentration (OC) for the bio-synthesis of the AuNPs. Several AuNPs of different sizes and shapes were successfully synthesized and characterized from 17 South African plants. The characterization was done using Ultraviolet-Visible Spectroscopy, Dynamic Light Scattering, High Resolution Transmission Electron Microscopy and Energy-Dispersive X-ray Spectroscopy. We also studied the effect of temperature on the synthesis of the AuNPs and showed that changes in temperature affect the size and dispersity of the generated AuNPs. We also evaluated the stability of the synthesized AuNPs and showed that some of them are stable in biological buffer solutions.

  12. Large-scale binding ligand prediction by improved patch-based method Patch-Surfer2.0.

    Science.gov (United States)

    Zhu, Xiaolei; Xiong, Yi; Kihara, Daisuke

    2015-03-01

    Ligand binding is a key aspect of the function of many proteins. Thus, binding ligand prediction provides important insight in understanding the biological function of proteins. Binding ligand prediction is also useful for drug design and examining potential drug side effects. We present a computational method named Patch-Surfer2.0, which predicts binding ligands for a protein pocket. By representing and comparing pockets at the level of small local surface patches that characterize physicochemical properties of the local regions, the method can identify binding pockets of the same ligand even if they do not share globally similar shapes. Properties of local patches are represented by an efficient mathematical representation, 3D Zernike Descriptor. Patch-Surfer2.0 has significant technical improvements over our previous prototype, which includes a new feature that captures approximate patch position with a geodesic distance histogram. Moreover, we constructed a large comprehensive database of ligand binding pockets that will be searched against by a query. The benchmark shows better performance of Patch-Surfer2.0 over existing methods. Availability: http://kiharalab.org/patchsurfer2.0/. Contact: dkihara@purdue.edu. Supplementary data are available at Bioinformatics online.

  13. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  14. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as following. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  15. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  16. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  17. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World famous architects such as Herzog & de Meuron, Zaha Hadid, Richard Meier and David Chipperfield challenge the exposure of concrete in their architecture. At home, this trend can be seen in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and in the black, curved, smooth concrete surfaces of Zaha Hadid's Ordrupgård. Furthermore, one can point to initiatives such as "Synlig beton" (visible concrete), presented on the website www.synligbeton.dk, and spæncom's aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project "Lasting large scale glazed concrete formwork," which I am working on at DTU's Department of Architectural Engineering, will be able to complement these. It is a project where I...

  18. A Nonnegative Latent Factor Model for Large-Scale Sparse Matrices in Recommender Systems via Alternating Direction Method.

    Science.gov (United States)

    Luo, Xin; Zhou, MengChu; Li, Shuai; You, Zhuhong; Xia, Yunni; Zhu, Qingsheng

    2016-03-01

    Nonnegative matrix factorization (NMF)-based models possess fine representativeness of a target matrix, which is critically important in collaborative filtering (CF)-based recommender systems. However, current NMF-based CF recommenders suffer from the problem of high computational and storage complexity, as well as slow convergence rate, which prevents them from industrial usage in context of big data. To address these issues, this paper proposes an alternating direction method (ADM)-based nonnegative latent factor (ANLF) model. The main idea is to implement the ADM-based optimization with regard to each single feature, to obtain high convergence rate as well as low complexity. Both computational and storage costs of ANLF are linear with the size of given data in the target matrix, which ensures high efficiency when dealing with extremely sparse matrices usually seen in CF problems. As demonstrated by the experiments on large, real data sets, ANLF also ensures fast convergence and high prediction accuracy, as well as the maintenance of nonnegativity constraints. Moreover, it is simple and easy to implement for real applications of learning systems.
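
    To make the latent factor idea concrete, the sketch below fits nonnegative user and item factors to the observed entries of a sparse rating matrix, enforcing nonnegativity by clipping after each stochastic update. This is a simplified stand-in, not the alternating direction method of the paper; all sizes and hyperparameters are invented.

```python
# Minimal sketch of a nonnegative latent factor model trained on the observed
# entries of a sparse rating matrix.  Nonnegativity is kept by projection
# (clipping) after each stochastic update; this illustrates the idea, not the
# alternating direction method of the paper.
import numpy as np

def train_nlf(rows, cols, vals, n_users, n_items, rank=8,
              lr=0.01, reg=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.random((n_users, rank)) * 0.1          # nonnegative user factors
    Q = rng.random((n_items, rank)) * 0.1          # nonnegative item factors
    for _ in range(epochs):
        for u, i, r in zip(rows, cols, vals):
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
            np.clip(P[u], 0.0, None, out=P[u])     # project back onto the
            np.clip(Q[i], 0.0, None, out=Q[i])     # nonnegative orthant
    return P, Q

# Tiny synthetic example: 50 users, 40 items, 500 observed ratings.
rng = np.random.default_rng(1)
rows = rng.integers(0, 50, 500)
cols = rng.integers(0, 40, 500)
vals = rng.uniform(1, 5, 500)
P, Q = train_nlf(rows, cols, vals, 50, 40)
rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in zip(rows, cols, vals)]))
print("training RMSE:", round(rmse, 3))
```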

  19. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered at a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of the galaxy distribution. Images of cell structures obtained after computer reprocessing are given. Three hypotheses are discussed, vortical, entropic and adiabatic, suggesting different processes for the origin of galaxies and galaxy clusters; a considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a means of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the properties of the disturbances at the pre-galaxy stage. The discussion of problems pertaining to the hot gas contained in galaxy clusters, and to interactions within galaxy clusters and with the intergalactic medium, is recognized as a notable contribution to the development of theoretical and observational cosmology

  20. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    Science.gov (United States)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for the large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures, however the simulation domain is limited due to heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation areas. One is applying an LBM that treats the two phases as having the same density, while keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the Capillary number that maintains flow patterns similar to the precise simulation; this is attempted as the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10⁻³, where the actual operation corresponds to Ca = 10⁻⁵–10⁻⁸. The limit is also investigated experimentally using an enlarged scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.

  1. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  2. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework

    NARCIS (Netherlands)

    Akita, Yasuyuki; Baldasano, Jose M.; Beelen, Rob; Cirach, Marta; De Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L.; De Nazelle, Audrey

    2014-01-01

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also

  3. Hybrid simulation methods to perform grid integration studies for large scale offshore wind power connected through VSC-HVDC

    NARCIS (Netherlands)

    Meer, van der A.A.; Hendriks, R.L.; Gibescu, M.; Ferreira, J.A.; Kling, W.L.

    2011-01-01

    This paper deals with the inclusion of VSC-HVdc transmission schemes into stability-type simulations by hybrid methods. These methods allow selected parts of the network to be simulated in detail by including electro-magnetic behaviour of devices and network elements whereas the remainder of the

  4. On the solution of large-scale SDP problems by the modified barrier method using iterative solvers

    Czech Academy of Sciences Publication Activity Database

    Kočvara, Michal; Stingl, M.

    2007-01-01

    Roč. 109, 2-3 (2007), s. 413-444 ISSN 0025-5610 R&D Projects: GA AV ČR IAA1075402 Institutional research plan: CEZ:AV0Z10750506 Keywords: semidefinite programming * iterative methods * preconditioned conjugate gradients * augmented Lagrangian methods Subject RIV: BA - General Mathematics Impact factor: 1.475, year: 2007

  5. Traditional methods v. new technologies – dilemmas for dietary assessment in large-scale nutrition surveys and studies

    DEFF Research Database (Denmark)

    Amoutzopoulos, B.; Steer, T.; Roberts, C.

    2018-01-01

    The aim of the present paper is to summarise current and future applications of dietary assessment technologies in nutrition surveys in developed countries. It includes the discussion of key points and highlights of subsequent developments from a panel discussion to address strengths and weaknesses of traditional dietary assessment methods (food records, FFQ, 24 h recalls, diet history with interviewer-assisted data collection) v. new technology-based dietary assessment methods (web-based and mobile device applications). The panel discussion ‘Traditional methods v. new technologies: dilemmas for dietary assessment in population surveys’ was held at the 9th International Conference on Diet and Activity Methods (ICDAM9), Brisbane, September 2015. Despite respondent and researcher burden, traditional methods have been most commonly used in nutrition surveys. However, dietary assessment technologies offer...

  6. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    Science.gov (United States)

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the server parameter de/activated, altogether 12 800 data points were collected and subsequently analyzed. An illustrative decision-making scenario that allows mutual comparison of all of the selected decision-making methods was used. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
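
    For reference, one of the compared methods, TOPSIS, can be implemented compactly; the sketch below ranks a small set of alternatives from a decision matrix. The matrix, weights and benefit/cost flags are invented for illustration and do not correspond to the study's agent-based scenario.

```python
# Compact TOPSIS scorer for a small decision matrix (alternatives x criteria).
# The matrix, weights and benefit/cost flags below are invented; the study's
# agent-based test harness is not reproduced here.
import numpy as np

def topsis(matrix, weights, benefit):
    """Return closeness coefficients; higher means a better alternative."""
    norm = matrix / np.linalg.norm(matrix, axis=0)          # vector-normalize columns
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_best = np.linalg.norm(weighted - ideal, axis=1)
    d_worst = np.linalg.norm(weighted - anti, axis=1)
    return d_worst / (d_best + d_worst)

decision = np.array([[250.0, 16, 12, 5],
                     [200.0, 16, 8, 3],
                     [300.0, 32, 16, 4],
                     [275.0, 32, 8, 4]])
weights = np.array([0.35, 0.25, 0.25, 0.15])
benefit = np.array([False, True, True, True])               # first criterion is a cost
print(np.argsort(topsis(decision, weights, benefit))[::-1]) # ranking, best first
```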

  7. A mixed-methods study of system-level sustainability of evidence-based practices in 12 large-scale implementation initiatives.

    Science.gov (United States)

    Scudder, Ashley T; Taber-Thomas, Sarah M; Schaffner, Kristen; Pemberton, Joy R; Hunter, Leah; Herschell, Amy D

    2017-12-07

    In recent decades, evidence-based practices (EBPs) have been broadly promoted in community behavioural health systems in the United States of America, yet reported EBP penetration rates remain low. Determining how to systematically sustain EBPs in complex, multi-level service systems has important implications for public health. This study examined factors impacting the sustainability of parent-child interaction therapy (PCIT) in large-scale initiatives in order to identify potential predictors of sustainment. A mixed-methods approach to data collection was used. Qualitative interviews and quantitative surveys examining sustainability processes and outcomes were completed by participants from 12 large-scale initiatives. Sustainment strategies fell into nine categories, including infrastructure, training, marketing, integration and building partnerships. Strategies involving integration of PCIT into existing practices and quality monitoring predicted sustainment, while financing also emerged as a key factor. The reported factors and strategies impacting sustainability varied across initiatives; however, integration into existing practices, monitoring quality and financing appear central to high levels of sustainability of PCIT in community-based systems. More detailed examination of the progression of specific activities related to these strategies may aide in identifying priorities to include in strategic planning of future large-scale initiatives. ClinicalTrials.gov ID NCT02543359 ; Protocol number PRO12060529.

  8. Leveraging Formal Methods and Fuzzing to Verify Security and Reliability Properties of Large-Scale High-Consequence Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ruthruff, Joseph. R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Armstrong, Robert C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Davis, Benjamin Garry [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Mayo, Jackson R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Punnoose, Ratish J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2012-09-01

    Formal methods describe a class of system analysis techniques that seek to prove specific properties about analyzed designs, or locate flaws compromising those properties. As an analysis capability, these techniques are the subject of increased interest from both internal and external customers of Sandia National Laboratories. Given this lab's other areas of expertise, Sandia is uniquely positioned to advance the state-of-the-art with respect to several research and application areas within formal methods. This research project was a one-year effort funded by Sandia's CyberSecurity S&T Investment Area in its Laboratory Directed Research & Development program to investigate the opportunities for formal methods to impact Sandia's present mission areas, more fully understand the needs of the research community in the area of formal methods and where Sandia can contribute, and clarify from those potential research paths those that would best advance the mission-area interests of Sandia. The accomplishments from this project reinforce the utility of formal methods in Sandia, particularly in areas relevant to Cyber Security, and set the stage for continued Sandia investments to ensure this capability is utilized and advanced within this laboratory to serve the national interest.

  9. Application and comparison of large-scale solution-based DNA capture-enrichment methods on ancient DNA

    DEFF Research Database (Denmark)

    Avila Arcos, Maria del Carmen; Cappellini, Enrico; Romero-Navarro, J. Alberto

    2011-01-01

    The development of second-generation sequencing technologies has greatly benefitted the field of ancient DNA (aDNA). Its application can be further exploited by the use of targeted capture-enrichment methods to overcome restrictions posed by low endogenous and contaminating DNA in ancient samples...

  10. Towards large-scale calculations with State-Specific Multireference Coupled Cluster methods: Studies on dodecane, naphthynes, and polycarbenes

    Czech Academy of Sciences Publication Activity Database

    Brabec, Jiří; Bhaskaran-Neir, K.; Kowalski, K.; Pittner, Jiří; van Dam, H. J. J.

    2012-01-01

    Roč. 542, 23 July (2012), s. 128-133 ISSN 0009-2614 R&D Projects: GA ČR GAP208/11/2222 Institutional support: RVO:61388955 Keywords : multireference Coupled Cluster (MRCC) methods * molecular systems * polycarbenes Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.145, year: 2012

  11. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented

  12. Analytical methods for large-scale sensitivity analysis using GRESS [GRadient Enhanced Software System] and ADGEN [Automated Adjoint Generator]

    International Nuclear Information System (INIS)

    Pin, F.G.

    1988-04-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and ADGEN now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed. 7 refs., 2 figs
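
    As a toy illustration of the derivative propagation that such automated tools embed in application codes, the sketch below implements forward-mode automatic differentiation with dual numbers and evaluates the sensitivity of a made-up model response. It is not GRESS or ADGEN, which operate by transforming FORTRAN source.

```python
# Tiny forward-mode automatic-differentiation sketch using dual numbers,
# illustrating the derivative propagation that source-transformation tools
# such as GRESS/ADGEN embed directly in application codes.  The model
# function below is an arbitrary stand-in for a code response.
import math

class Dual:
    """Value together with its derivative with respect to one input."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def dual_exp(x):
    """exp for dual numbers: d/dk exp(u) = exp(u) * u'."""
    return Dual(math.exp(x.value), math.exp(x.value) * x.deriv)

def response(k, t=2.0):
    """Example model response R(k) = 3 exp(-k t) + k^2."""
    return 3.0 * dual_exp(Dual(-1.0, 0.0) * k * t) + k * k

k = Dual(0.7, 1.0)                 # seed derivative dk/dk = 1
r = response(k)
print("response:", r.value, "  sensitivity dR/dk:", r.deriv)
```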

  13. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    Science.gov (United States)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
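
    A serial, CPU-only sketch of the underlying sampling scheme is given below: sequential Metropolis updates of a small primitive-model-like system of charged hard spheres in a closed box, with bare 1/r interactions evaluated directly. System size, charges and step parameters are illustrative assumptions; the GPU/SIMD organisation and the measured correlation functions of the paper are not reproduced.

```python
# Serial sketch of canonical Metropolis Monte Carlo for a small system of
# charged hard spheres confined to a cubic box (bare 1/r interactions,
# reduced units).  Parameters are illustrative only.
import numpy as np

def particle_energy(pos, charges, idx, sigma=1.0):
    """Coulomb energy of particle idx with all others; inf on hard-sphere overlap."""
    d = np.linalg.norm(pos - pos[idx], axis=1)
    d[idx] = np.inf                                   # exclude self-interaction
    if np.any(d < sigma):
        return np.inf                                 # overlapping hard spheres
    return charges[idx] * np.sum(charges / d)

def metropolis_sweep(pos, charges, beta, step, box, rng):
    """One sequential Metropolis sweep over all particles."""
    for i in range(len(pos)):
        old = pos[i].copy()
        e_old = particle_energy(pos, charges, i)
        trial = old + rng.uniform(-step, step, 3)
        if np.any(trial < 0.0) or np.any(trial > box):
            continue                                  # reject moves leaving the box
        pos[i] = trial
        de = particle_energy(pos, charges, i) - e_old
        if de > 0 and rng.random() >= np.exp(-beta * de):
            pos[i] = old                              # Metropolis rejection
    return pos

rng = np.random.default_rng(0)
box = 10.0
grid = np.linspace(1.0, box - 1.0, 4)
positions = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T   # 64 lattice sites
charges = np.tile([1.0, -1.0], len(positions) // 2)                    # neutral 1:1 system
for _ in range(100):
    positions = metropolis_sweep(positions, charges, beta=1.0, step=0.5, box=box, rng=rng)
total = 0.5 * sum(particle_energy(positions, charges, i) for i in range(len(positions)))
print("total Coulomb energy after equilibration:", round(total, 2))
```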

  14. Recombinant hemoglobin II from Lucina pectinata: a large-scale method for hemeprotein expression in E. coli.

    Science.gov (United States)

    Ramos, Cacimar; Pietri, Ruth; Lorenzo, Wilmarie; Roman, Elddie; Granell, Laura B; Cadilla, Carmen L; López-Garriga, Juan

    2010-02-01

    Hemoglobin II from the clam L. pectinata is an O(2) reactive protein that remains oxygenated in the presence of other molecules. To determine the mechanism of ligand selection in this hemoglobin, rHbII was expressed in large quantities using an improved fermentation process. The highest protein yield was obtained by: transforming HbII into the BLi5 cells, inducing and supplementing the culture during the mid-log phase with 1 mM IPTG, 30 microg/mL hemin chloride and 1% glucose, and decreasing the temperature to 30 degrees C after induction. In addition, cell culture density was greatly enhanced by using glycerol, adding MgSO(4), supplementing the media with glucose after the glycerol was consumed and maintaining the dissolved oxygen at 35%. Under these conditions the maximum protein yield obtained was approximately 2,300 mg/L. The results indicate that rHbII is similar to the native protein. The protocol was validated with other hemoglobins, indicating that it can be extended to other hemeproteins.

  15. Robust and efficient direct multiplex amplification method for large-scale DNA detection of blood samples on FTA cards

    International Nuclear Information System (INIS)

    Jiang Bowei; Xiang Fawei; Zhao Xingchun; Wang Lihua; Fan Chunhai

    2013-01-01

    Deoxyribonucleic acid (DNA) damage arising from radiation has become widespread along with the development of nuclear weapons and the clinically wide application of computed tomography (CT) scans and nuclear medicine. All ionizing radiations (X-rays, γ-rays, alpha particles, etc.) and ultraviolet (UV) radiation lead to DNA damage. Polymerase chain reaction (PCR) is one of the most widely used techniques for detecting DNA damage, as the amplification stops at the site of the damage. Improvements to enhance the efficiency of PCR are always required and remain a great challenge. Here we establish a multiplex PCR assay system (MPAS) that serves as a robust and efficient method for direct detection of target DNA sequences in genomic DNA. The system is established by adding a combination of PCR enhancers to standard PCR buffer. The performance of MPAS was demonstrated by carrying out direct PCR amplification on 1.2 mm human blood punches using commercially available primer sets which include multiple primer pairs. The optimized PCR system resulted in high quality genotyping results without any indication of inhibitory effects and led to a full-profile success rate of 98.13%. Our studies demonstrate that the MPAS provides an efficient and robust method for obtaining sensitive, reliable and reproducible PCR results from human blood samples. (authors)

  16. Zero boil-off methods for large-scale liquid hydrogen tanks using integrated refrigeration and storage

    Science.gov (United States)

    Notardonato, W. U.; Swanger, A. M.; Fesmire, J. E.; Jumper, K. M.; Johnson, W. L.; Tomsik, T. M.

    2017-12-01

    NASA has completed a series of tests at the Kennedy Space Center to demonstrate the capability of using integrated refrigeration and storage (IRAS) to remove energy from a liquid hydrogen (LH2) tank and control the state of the propellant. A primary test objective was to keep and store the liquid in a zero boil-off state, so that the total heat leak entering the tank is removed by a cryogenic refrigerator with an internal heat exchanger. The LH2 is therefore stored and kept with zero losses for an indefinite period of time. The LH2 tank has a horizontal cylindrical geometry with a vacuum-jacketed, multilayer insulation system and a capacity of 125,000 liters. The closed-loop helium refrigeration system was a Linde LR1620 capable of 390 W cooling at 20 K (without any liquid nitrogen pre-cooling). Three different control methods were used to obtain zero boil-off: temperature control of the helium refrigerant, refrigerator control using the tank pressure sensor, and duty cycling (on/off) of the refrigerator as needed. Summarized are the IRAS design approach, zero boil-off control methods, and results of the series of zero boil-off tests.

  17. Data and performance profiles applying an adaptive truncation criterion, within linesearch-based truncated Newton methods, in large scale nonconvex optimization

    Directory of Open Access Journals (Sweden)

    Andrea Caliciotti

    2018-04-01

    Full Text Available In this paper, we report data and experiments related to the research article entitled “An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization” by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details concerning numerical experiences over a commonly used test set, namely CUTEst (Gould et al., 2015) [2]. Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002) [3], adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999) [4].
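
    Since the comparisons in the article are reported as performance profiles, a short routine for computing Dolan and Moré profile curves from a solver-by-problem cost matrix may be useful; the sketch below uses synthetic costs, not the CUTEst results of [1].

```python
# Compute Dolan-Moré performance-profile curves from a cost matrix whose
# entry [i, j] is the cost (e.g. inner iterations) of solver i on problem j;
# np.inf marks a failure.  The data below are synthetic placeholders.
import numpy as np

def performance_profile(costs, taus):
    """Return rho[i, k] = fraction of problems solver i solves within factor taus[k]."""
    best = costs.min(axis=0)                          # best cost per problem
    ratios = costs / best                             # performance ratios r_{i,j}
    return np.array([[np.mean(ratios[i] <= t) for t in taus]
                     for i in range(costs.shape[0])])

rng = np.random.default_rng(0)
costs = rng.lognormal(mean=3.0, sigma=0.6, size=(2, 50))
costs[1, rng.random(50) < 0.1] = np.inf               # solver 1 fails on ~10% of problems
taus = np.linspace(1, 10, 50)
rho = performance_profile(costs, taus)
print("rho at tau=1:", rho[:, 0], "   rho at tau=10:", rho[:, -1])
```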

  18. A parallel electrostatic Particle-in-Cell method on unstructured tetrahedral grids for large-scale bounded collisionless plasma simulations

    Science.gov (United States)

    Averkin, Sergey N.; Gatsonis, Nikolaos A.

    2018-06-01

    An unstructured electrostatic Particle-In-Cell (EUPIC) method is developed on arbitrary tetrahedral grids for simulation of plasmas bounded by arbitrary geometries. The electric potential in EUPIC is obtained on cell vertices from a finite volume Multi-Point Flux Approximation of Gauss' law using the indirect dual cell with Dirichlet, Neumann and external circuit boundary conditions. The resulting matrix equation for the nodal potential is solved with a restarted generalized minimal residual method (GMRES) and an ILU(0) preconditioner algorithm, parallelized using a combination of node coloring and level scheduling approaches. The electric field on vertices is obtained using the gradient theorem applied to the indirect dual cell. The algorithms for injection, particle loading, particle motion, and particle tracking are parallelized for unstructured tetrahedral grids. The algorithms for the potential solver, electric field evaluation, loading, and scatter-gather operations are verified using analytic solutions for test cases subject to Laplace and Poisson equations. Grid sensitivity analysis examines the L2 and L∞ norms of the relative error in potential, field, and charge density as a function of edge-averaged and volume-averaged cell size. Analysis shows second order of convergence for the potential and first order of convergence for the electric field and charge density. Temporal sensitivity analysis is performed and the momentum and energy conservation properties of the particle integrators in EUPIC are examined. The effects of cell size and timestep on heating, slowing-down and deflection times are quantified. The heating, slowing-down and deflection times are found to be almost linearly dependent on the number of particles per cell. EUPIC simulations of current collection by cylindrical Langmuir probes in collisionless plasmas show good agreement with previous experimentally validated numerical results. These simulations were also used in a parallelization

  19. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  20. Large-scale synthesis of double cauliflower-like Sb2S3 microcrystallines by hydrothermal method

    International Nuclear Information System (INIS)

    Wu, Lei; Xu, Hanyue; Han, Qiaofeng; Wang, Xin

    2013-01-01

    Highlights: •Highly uniform double cauliflower-like Sb2S3 particles were synthesized via a hydrothermal method. •Influence of reaction conditions on the morphology of the products was discussed. •Double cauliflower-like Sb2S3 superstructures revealed broad spectrum response. -- Abstract: The double cauliflower-like Sb2S3 superstructures assembled by nanorods were prepared using SbCl3 and Na2S·9H2O as raw materials and dodecyltrimethylammonium bromide (DTAB, C15H31BrN) as the surfactant under acidic conditions at 180 °C for 30 h. The structure, morphology and composition of the product were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and energy-dispersive spectroscopy (EDS). The effect of reaction conditions including temperature, reaction time and surfactants on the sample morphology was discussed, and a possible mechanism for the formation of cauliflower-like Sb2S3 was proposed. The cauliflower-like Sb2S3 microcrystallines revealed broad spectrum response, which may have a good application prospect in solar energy utilization and photoelectric conversion fields.

  1. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  2. A Normalization-Free and Nonparametric Method Sharpens Large-Scale Transcriptome Analysis and Reveals Common Gene Alteration Patterns in Cancers.

    Science.gov (United States)

    Li, Qi-Gang; He, Yong-Han; Wu, Huan; Yang, Cui-Ping; Pu, Shao-Yan; Fan, Song-Qing; Jiang, Li-Ping; Shen, Qiu-Shuo; Wang, Xiao-Xiong; Chen, Xiao-Qiong; Yu, Qin; Li, Ying; Sun, Chang; Wang, Xiangting; Zhou, Jumin; Li, Hai-Peng; Chen, Yong-Bin; Kong, Qing-Peng

    2017-01-01

    Heterogeneity in transcriptional data hampers the identification of differentially expressed genes (DEGs) and understanding of cancer, essentially because current methods rely on cross-sample normalization and/or distribution assumption-both sensitive to heterogeneous values. Here, we developed a new method, Cross-Value Association Analysis (CVAA), which overcomes the limitation and is more robust to heterogeneous data than the other methods. Applying CVAA to a more complex pan-cancer dataset containing 5,540 transcriptomes discovered numerous new DEGs and many previously rarely explored pathways/processes; some of them were validated, both in vitro and in vivo, to be crucial in tumorigenesis, e.g., alcohol metabolism (ADH1B), chromosome remodeling (NCAPH) and complement system (Adipsin). Together, we present a sharper tool to navigate large-scale expression data and gain new mechanistic insights into tumorigenesis.

  3. Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov–Maxwell equations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao

    2015-12-14

    Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g. 10^9, degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
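
    The canonical symplectic discretization in this work is specific to the Vlasov-Maxwell Poisson bracket and is not reproduced here; the short sketch below only illustrates the general idea of a structure-preserving (symplectic) time advance, using the familiar kick-drift-kick leapfrog scheme for a single particle in a fixed electrostatic field. The field, step size and initial condition are toy assumptions.

    import numpy as np

    def efield(x):
        return -x                                  # E = -d(phi)/dx for phi = x^2 / 2

    def leapfrog(x, v, dt, steps, qm=1.0):
        """Kick-drift-kick leapfrog; a symplectic (phase-space-volume preserving) update."""
        for _ in range(steps):
            v = v + 0.5 * dt * qm * efield(x)      # half kick
            x = x + dt * v                         # drift
            v = v + 0.5 * dt * qm * efield(x)      # half kick
        return x, v

    x, v = leapfrog(1.0, 0.0, dt=0.1, steps=10_000)
    # The energy error stays bounded over long runs instead of drifting,
    # which is the property motivating symplectic integrators for long-term PIC.
    print("energy error:", abs(0.5 * v**2 + 0.5 * x**2 - 0.5))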

  4. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  5. Large-scale solar heating

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Advanced Energy Systems

    1998-10-01

    The solar heating market is growing in many European countries, and the annually installed collector area has exceeded one million square meters. There are dozens of collector manufacturers and hundreds of firms making solar heating installations in Europe. One tendency in solar heating is towards larger systems. These can be roof integrated, consisting of some tens or hundreds of square meters of collectors, or they can be larger centralized solar district heating plants consisting of a few thousand square meters of collectors. The increase in size can reduce the specific investment of solar heating systems, because, e.g., the costs of some components (controllers, pumps, and pipes), planning and installation can be smaller in larger systems. The solar heat output can also be higher in large systems, because more advanced technology is economically viable.

  6. Concepts for Large Scale Hydrogen Production

    OpenAIRE

    Jakobsen, Daniel; Åtland, Vegar

    2016-01-01

    The objective of this thesis is to perform a techno-economic analysis of large-scale, carbon-lean hydrogen production in Norway, in order to evaluate various production methods and estimate a breakeven price level. Norway possesses vast energy resources and the export of oil and gas is vital to the country's economy. The results of this thesis indicate that hydrogen represents a viable, carbon-lean opportunity to utilize these resources, which can prove key in the future of Norwegian energy e...

  7. Large scale phononic metamaterials for seismic isolation

    International Nuclear Information System (INIS)

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-01-01

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.

  8. A flexible and cost-effective compensation method for leveling using large-scale coordinate measuring machines and its application in aircraft digital assembly

    Science.gov (United States)

    Deng, Zhengping; Li, Shuanggao; Huang, Xiang

    2018-06-01

    In the assembly process of large-size aerospace products, the leveling and horizontal alignment of large components are essential prior to the installation of an inertial navigation system (INS) and the final quality inspection. In general, the inherent coordinate systems of large-scale coordinate measuring devices are not coincident with the geodetic horizontal system, and a dual-axis compensation system is commonly required for the measurement of difference in heights. These compensation systems are expensive and dedicated designs for different devices at present. Considering that a large-size assembly site usually needs more than one measuring device, a compensation approach which is versatile for different devices would be a more convenient and economic choice for manufacturers. In this paper, a flexible and cost-effective compensation method is proposed. Firstly, an auxiliary measuring device called a versatile compensation fixture (VCF) is designed, which mainly comprises reference points for coordinate transformation and a dual-axis inclinometer, and a kind of network tighten points (NTPs) are introduced and temporarily deployed in the large measuring space to further reduce transformation error. Secondly, the measuring principle of height difference is studied, based on coordinate transformation theory and trigonometry while considering the effects of earth curvature, and the coordinate transformation parameters are derived by least squares adjustment. Thirdly, the analytical solution of leveling uncertainty is analyzed, based on which the key parameters of the VCF and the proper deployment of NTPs are determined according to the leveling accuracy requirement. Furthermore, the proposed method is practically applied to the assembly of a large helicopter by developing an automatic leveling and alignment system. By measuring four NTPs, the leveling uncertainty (2σ) is reduced by 29.4% to about 0.12 mm, compared with that without NTPs.
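
    The paper's VCF/NTP adjustment is not detailed in this record; the sketch below shows only the generic coordinate-transformation step it builds on, estimating a rigid rotation and translation between common reference points measured in two frames by least squares (the SVD-based Kabsch construction). All point values are synthetic.

    import numpy as np

    def rigid_transform(P, Q):
        """Return R, t minimizing sum ||R @ p_i + t - q_i||^2 for 3xN point sets."""
        cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
        H = (P - cp) @ (Q - cq).T
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t

    # Self-check: recover a known rotation and translation from noisy points.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(3, 8))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = R_true @ P + np.array([[1.0], [2.0], [0.5]]) + 1e-3 * rng.normal(size=P.shape)
    R, t = rigid_transform(P, Q)
    print("rotation error:", np.linalg.norm(R - R_true))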

  9. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    Since the 1990s, the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. A prominent example of a contemporary spatial development approach is the IBA Emscher Park in the Ruhr area in Germany. Over a 10-year period (1988-1998), more than 100 local transformation projects contributed to the transformation from an industrial to a post-industrial region. The current paradigm of planning by projects reinforces the role of the design disciplines… for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects…

  10. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  11. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  12. Sentiment analysis methods for understanding large-scale texts: a case for using continuum-scored words and word shift graphs

    Directory of Open Access Journals (Sweden)

    Andrew J Reagan

    2017-10-01

    Full Text Available Abstract The emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, an extraordinary capacity which has profound implications for our understanding of human behavior. Given the growing assortment of sentiment-measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale.
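
    As a minimal illustration of the two conditions above, the sketch below scores a text with a continuum-valued dictionary and reports the frequency-weighted coverage; the word scores and the coverage threshold are invented for illustration and are not taken from any published dictionary or from this study.

    from collections import Counter

    happiness = {"love": 8.4, "happy": 8.3, "good": 7.5, "bad": 2.6, "hate": 2.2}  # toy scores

    def score_text(text, dictionary, min_coverage=0.3):
        counts = Counter(w.strip(".,!?").lower() for w in text.split())
        matched = {w: c for w, c in counts.items() if w in dictionary}
        coverage = sum(matched.values()) / max(1, sum(counts.values()))
        if coverage < min_coverage:        # dictionary covers too little of the lexicon
            return None, coverage
        total = sum(matched.values())
        mean_score = sum(dictionary[w] * c for w, c in matched.items()) / total
        return mean_score, coverage

    print(score_text("I love this good day, despite the bad news", happiness))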

  13. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    Science.gov (United States)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combined with a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helpful for focusing on suspected ship areas while avoiding a land-sea separation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in images. Then these features are employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After getting the suspicious areas, there are still some false alarms such as microwaves and small ribbon clouds, thus simple shape and texture analyses are adopted to distinguish between ships and non-ships in suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
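
    A much simplified sketch of the chip-classification stage is given below: local binary pattern (LBP) histograms used as texture features for an SVM. The saliency/visual-attention stage and real imagery are omitted; random arrays stand in for image chips and labels, so this only illustrates the feature-and-classifier plumbing, not the CVLBP model itself.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(chip, P=8, R=1.0):
        lbp = local_binary_pattern(chip, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    rng = np.random.default_rng(0)
    chips = rng.integers(0, 256, size=(40, 64, 64)).astype(np.uint8)  # stand-in image chips
    labels = rng.integers(0, 2, size=40)                              # 1 = ship, 0 = background

    X = np.array([lbp_histogram(c) for c in chips])
    clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
    print("held-out predictions:", clf.predict(X[30:]))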

  14. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research so far shows that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  15. Multidimensional scaling for large genomic data sets

    Directory of Open Access Journals (Sweden)

    Lu Henry

    2008-04-01

    Full Text Available Abstract Background Multi-dimensional scaling (MDS) aims to represent high dimensional data in a low dimensional space with preservation of the similarities between data points. This reduction in dimensionality is crucial for analyzing and revealing the genuine structure hidden in the data. For noisy data, dimension reduction can effectively reduce the effect of noise on the embedded structure. For large data sets, dimension reduction can effectively reduce information retrieval complexity. Thus, MDS techniques are used in many applications of data mining and gene network research. However, although there have been a number of studies that applied MDS techniques to genomics research, the number of analyzed data points was restricted by the high computational complexity of MDS. In general, a non-metric MDS method is faster than a metric MDS, but it does not preserve the true relationships. The computational complexity of most metric MDS methods is over O(N²), so it is difficult to process a data set with a large number of genes N, such as in the case of whole genome microarray data. Results We developed a new rapid metric MDS method with a low computational complexity, making metric MDS applicable for large data sets. Computer simulation showed that the new method of split-and-combine MDS (SC-MDS) is fast, accurate and efficient. Our empirical studies using microarray data on the yeast cell cycle showed that the performance of K-means in the reduced dimensional space is similar to or slightly better than that of K-means in the original space, but about three times faster to obtain the clustering results. Our clustering results using SC-MDS are more stable than those in the original space. Hence, the proposed SC-MDS is useful for analyzing whole genome data. Conclusion Our new method reduces the computational complexity from O(N³) to O(N) when the dimension of the feature space is far less than the number of genes N, and it successfully
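
    SC-MDS itself is described in the article; for contrast, the sketch below is the classical (Torgerson) metric MDS whose cost grows rapidly with N, computed here with NumPy on a small synthetic distance matrix to make the embedding step concrete.

    import numpy as np

    def classical_mds(D, k=2):
        """Embed N points in k dimensions from an N x N distance matrix D."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
        w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
        idx = np.argsort(w)[::-1][:k]
        L = np.sqrt(np.clip(w[idx], 0.0, None))
        return V[:, idx] * L                         # N x k coordinates

    pts = np.random.default_rng(1).normal(size=(100, 5))
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    X = classical_mds(D, k=2)
    print(X.shape)   # (100, 2); pairwise distances in X approximate those in D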

  16. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009
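
    Among the keywords above are limited-memory methods. As a minimal illustration of what such a method computes (this is not the LSA code, which is distributed as Fortran), the NumPy sketch below implements the standard L-BFGS two-loop recursion, which turns a small set of stored curvature pairs (s, y) into an approximate Newton direction without ever forming a Hessian.

    import numpy as np

    def lbfgs_direction(g, s_list, y_list):
        """Return -H*g, with H the inverse-Hessian approximation built from (s, y) pairs."""
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            q -= a * y
            alphas.append((a, rho, s, y))
        if s_list:                                             # initial scaling H0 = gamma * I
            s, y = s_list[-1], y_list[-1]
            q *= (s @ y) / (y @ y)
        for a, rho, s, y in reversed(alphas):                  # oldest pair first
            b = rho * (y @ q)
            q += (a - b) * s
        return -q

    # In a full method this direction is combined with a linesearch, and the pairs
    # (s, y) = (x_new - x_old, grad_new - grad_old) are pushed onto a short deque.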

  17. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  18. A revised method of presenting wavenumber-frequency power spectrum diagrams that reveals the asymmetric nature of tropical large-scale waves

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Winston C. [NASA/Goddard Space Flight Center, Global Modeling and Assimilation Office, Mail Code 610.1, Greenbelt, MD (United States); Yang, Bo; Fu, Xiouhua [University of Hawaii at Manoa, School of Ocean and Earth Science and Technology, International Pacific Research Center, Honolulu, HI (United States)

    2009-11-15

    The popular method of presenting wavenumber-frequency power spectrum diagrams for studying tropical large-scale waves in the literature is shown to give an incomplete presentation of these waves. The so-called "convectively coupled Kelvin (mixed Rossby-gravity) waves" are presented as existing only in the symmetric (anti-symmetric) component of the diagrams. This is obviously not consistent with the published composite/regression studies of "convectively coupled Kelvin waves," which illustrate the asymmetric nature of these waves. The cause of this inconsistency is revealed in this note and a revised method of presenting the power spectrum diagrams is proposed. When this revised method is used, "convectively coupled Kelvin waves" do show anti-symmetric components, and "convectively coupled mixed Rossby-gravity waves (also known as Yanai waves)" do show a hint of symmetric components. These results bolster a published proposal that these waves should be called "chimeric Kelvin waves," "chimeric mixed Rossby-gravity waves," etc. This revised method of presenting power spectrum diagrams offers an additional means of comparing the GCM output with observations by calling attention to the capability of GCMs to correctly simulate the asymmetric characteristics of equatorial waves. (orig.)
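
    A minimal sketch of the underlying computation, the symmetric/antisymmetric decomposition about the equator followed by a wavenumber-frequency power spectrum, is given below on synthetic data; the grid sizes, the simple mean removal, and the plain latitude average are assumptions, and the revised presentation proposed in the paper is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    ntime, nlat, nlon = 256, 16, 144                 # toy grid, latitudes symmetric about 0
    field = rng.normal(size=(ntime, nlat, nlon))     # stand-in for, e.g., OLR anomalies

    sym = 0.5 * (field + field[:, ::-1, :])          # symmetric component about the equator
    asym = 0.5 * (field - field[:, ::-1, :])         # antisymmetric component

    def wk_power(x):
        """Latitude-averaged power of the 2D (time, longitude) Fourier transform."""
        spec = np.fft.fft2(x - x.mean(axis=0), axes=(0, 2))
        return (np.abs(spec) ** 2).mean(axis=1)

    power_sym, power_asym = wk_power(sym), wk_power(asym)
    print(power_sym.shape)                           # (frequency, zonal wavenumber)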

  19. Large scale analysis of signal reachability.

    Science.gov (United States)

    Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer

    2014-06-15

    Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem and thus solving it for very large TRNs remains a challenge. We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. Our method follows a divide-and-conquer strategy. We break down the given network into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial. The product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork. These operators reduce the size of the resulting polynomial and thus the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while the existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples. All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm.
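
    The PReach method stays tractable on full-size networks through its polynomial collapsing operators; the brute-force sketch below only illustrates the quantity being computed, the probability that a signal reaches a target when each interaction exists independently with some probability, on a toy network with made-up edge probabilities.

    from itertools import product

    edges = {("src", "a"): 0.9, ("a", "tf"): 0.8, ("src", "b"): 0.5, ("b", "tf"): 0.7}

    def reach_probability(edges, source, target):
        names = list(edges)
        prob = 0.0
        for present in product([True, False], repeat=len(names)):   # all 2^|E| scenarios
            p, graph = 1.0, {}
            for edge, keep in zip(names, present):
                p *= edges[edge] if keep else (1.0 - edges[edge])
                if keep:
                    graph.setdefault(edge[0], []).append(edge[1])
            stack, seen = [source], set()                            # depth-first reachability
            while stack:
                u = stack.pop()
                if u == target:
                    prob += p
                    break
                if u not in seen:
                    seen.add(u)
                    stack.extend(graph.get(u, []))
        return prob

    print(reach_probability(edges, "src", "tf"))   # 0.818 for this toy network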

  20. The large-scale blast score ratio (LS-BSR) pipeline: a method to rapidly compare genetic content between bacterial genomes

    Directory of Open Access Journals (Sweden)

    Jason W. Sahl

    2014-04-01

    Full Text Available Background. As whole genome sequence data from bacterial isolates becomes cheaper to generate, computational methods are needed to correlate sequence data with biological observations. Here we present the large-scale BLAST score ratio (LS-BSR) pipeline, which rapidly compares the genetic content of hundreds to thousands of bacterial genomes, and returns a matrix that describes the relatedness of all coding sequences (CDSs) in all genomes surveyed. This matrix can be easily parsed in order to identify genetic relationships between bacterial genomes. Although pipelines have been published that group peptides by sequence similarity, no other software performs the rapid, large-scale, full-genome comparative analyses carried out by LS-BSR. Results. To demonstrate the utility of the method, the LS-BSR pipeline was tested on 96 Escherichia coli and Shigella genomes; the pipeline ran in 163 min using 16 processors, which is a greater than 7-fold speedup compared to using a single processor. The BSR values for each CDS, which indicate a relative level of relatedness, were then mapped to each genome on an independent core genome single nucleotide polymorphism (SNP) based phylogeny. Comparisons were then used to identify clade-specific CDS markers and validate the LS-BSR pipeline based on molecular markers that delineate between classical E. coli pathogenic variant (pathovar) designations. Scalability tests demonstrated that the LS-BSR pipeline can process 1,000 E. coli genomes in 27–57 h, depending upon the alignment method, using 16 processors. Conclusions. LS-BSR is an open-source, parallel implementation of the BSR algorithm, enabling rapid comparison of the genetic content of large numbers of genomes. The results of the pipeline can be used to identify specific markers between user-defined phylogenetic groups, and to identify the loss and/or acquisition of genetic information between bacterial isolates. Taxa-specific genetic markers can then be translated
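
    The statistic the pipeline tabulates is the BLAST score ratio itself: the alignment score of each coding sequence against a genome divided by its self-alignment score, giving a relatedness value between 0 and 1. The toy sketch below shows that bookkeeping with invented scores; LS-BSR obtains the real scores from BLAST or the other supported aligners.

    self_scores = {"cdsA": 500.0, "cdsB": 320.0}          # score of each CDS against itself
    hit_scores = {                                        # best score of each CDS vs each genome
        "genome1": {"cdsA": 495.0, "cdsB": 120.0},
        "genome2": {"cdsA": 250.0},                       # cdsB absent, so its score is 0
    }

    bsr = {
        genome: {cds: hits.get(cds, 0.0) / self_scores[cds] for cds in self_scores}
        for genome, hits in hit_scores.items()
    }
    print(bsr)   # e.g. genome1/cdsA = 0.99 (conserved), genome2/cdsB = 0.0 (missing)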

  1. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend of large scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  2. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  3. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era

  4. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  5. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CSFR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
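
    As a small-scale sketch of the EOF computation described above (the Spark-based distributed version does the same linear algebra out of core), latitude-weighted anomalies are factored with an SVD and the right singular vectors give the spatial EOF patterns; all sizes and the synthetic field below are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    ntime, nlat, nlon = 120, 30, 60
    data = rng.normal(size=(ntime, nlat, nlon))          # stand-in temperature field
    lats = np.linspace(-60.0, 60.0, nlat)
    weights = np.sqrt(np.cos(np.deg2rad(lats)))[None, :, None]   # area weighting

    anom = (data - data.mean(axis=0)) * weights          # remove time mean, apply weights
    X = anom.reshape(ntime, -1)                          # time x space matrix
    U, S, Vt = np.linalg.svd(X, full_matrices=False)

    eofs = Vt[:3].reshape(3, nlat, nlon)                 # first three spatial patterns
    pcs = U[:, :3] * S[:3]                               # corresponding principal components
    explained = S**2 / np.sum(S**2)
    print("variance explained by first 3 EOFs:", explained[:3].sum())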

  6. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and the political sphere. In this position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environments. Large-scale research can neither deal with the production of innovative goods under considerations of profitability, nor can it hope for full recognition by the basic-research-oriented scientific community. Policy consulting has neither the political system's competence to make decisions, nor can it judge successfully by the critical standards of established social science, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three respects, the thesis that this is a new form of the institutionalization of science: 1) external control, 2) the form of organization, 3) the theoretical conception of large-scale research and policy consulting. (orig.)

  7. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  8. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

  9. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    Large-scale ringing schemes (e.g. Peach et al., 1998; DeSante et al., 2001) are generally co-ordinated by ringing centres such as those that make up the membership of EURING. In some countries volunteer census work (often called Breeding Bird Surveys) is undertaken by the same organizations while in others different bodies may co-ordinate this aspect of the work. This session was concerned with the analysis of such extensive data sets and the approaches that are being developed to address the key theoretical and applied issues outlined above. The papers reflect the development of more spatially explicit approaches to analyses of data gathered at large spatial scales. They show that while the statistical tools that have been developed in recent years can be used to derive useful biological conclusions from such data, there is additional need for further developments. Future work should also consider how best to implement such analytical developments within future study designs. In his plenary paper Andy Royle (Royle, 2004) addresses this theme directly by describing a general framework for modelling spatially replicated abundance data. The approach is based on the idea that a set of spatially referenced local populations constitutes a metapopulation, within which local abundance is determined as a random process. This provides an elegant and general approach in which the metapopulation model as described above is combined with a data-generating model specific to the type of data being analysed to define a simple hierarchical model that can be analysed using conventional methods. It should be noted, however, that further software development will be needed if the approach is to be made readily available to biologists. The approach is well suited to dealing with sparse data and avoids the need for data aggregation prior to analysis. Spatial synchrony has received most attention in studies of species whose populations show cyclic fluctuations, particularly certain game birds and small mammals. However

  10. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  11. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analysis/reanalysis all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases in cloud properties which cannot be fully explained by the uncertainty from the large-scale forcing

  12. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first

  13. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  14. Evaluation of a multi-atlas based method for segmentation of cardiac CTA data: a large-scale, multicenter, and multivendor study

    International Nuclear Information System (INIS)

    Kirisli, H. A.; Schaap, M.; Klein, S.; Papadopoulou, S. L.; Bonardi, M.; Chen, C. H.; Weustink, A. C.; Mollet, N. R.; Vonken, E. J.; Geest, R. J. van der; Walsum, T. van; Niessen, W. J.

    2010-01-01

    Purpose: Computed tomography angiography (CTA) is increasingly used for the diagnosis of coronary artery disease (CAD). However, CTA is not commonly used for the assessment of ventricular and atrial function, although functional information extracted from CTA data is expected to improve the diagnostic value of the examination. In clinical practice, the extraction of ventricular and atrial functional information, such as stroke volume and ejection fraction, requires accurate delineation of cardiac chambers. In this paper, we investigated the accuracy and robustness of cardiac chamber delineation using a multi-atlas based segmentation method on multicenter and multivendor CTA data. Methods: A fully automatic multi-atlas based method for segmenting the whole heart (i.e., the outer surface of the pericardium) and cardiac chambers from CTA data is presented and evaluated. In the segmentation approach, eight atlas images are registered to a new patient's CTA scan. The eight corresponding manually labeled images are then propagated and combined using a per-voxel majority voting procedure, to obtain a cardiac segmentation. Results: The method was evaluated on a multicenter/multivendor database, consisting of (1) a set of 1380 Siemens scans from 795 patients and (2) a set of 60 multivendor scans (Siemens, Philips, and GE) from different patients, acquired in six different institutions worldwide. A leave-one-out 3D quantitative validation was carried out on the eight atlas images; we obtained a mean surface-to-surface error of 0.94±1.12 mm and an average Dice coefficient of 0.93. A 2D quantitative evaluation was performed on the 60 multivendor data sets. Here, we observed a mean surface-to-surface error of 1.26±1.25 mm and an average Dice coefficient of 0.91. In addition to this quantitative evaluation, a large-scale 2D and 3D qualitative evaluation was performed on 1380 and 140 images, respectively. Experts evaluated that 49% of the 1380 images
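
    A toy sketch of the per-voxel majority-voting step is given below: after the eight atlas label images have been registered to the patient scan, each voxel takes the most frequent label among the propagated atlases. Registration itself is omitted and random label volumes stand in for the warped atlases, so only the label-fusion bookkeeping is shown.

    import numpy as np

    rng = np.random.default_rng(0)
    n_atlases, shape, n_labels = 8, (16, 16, 16), 5      # e.g. 0 = background, 1-4 = chambers
    warped = rng.integers(0, n_labels, size=(n_atlases, *shape))   # propagated atlas labels

    # Count votes per label at every voxel, then pick the winning label.
    votes = np.stack([(warped == lbl).sum(axis=0) for lbl in range(n_labels)])
    fused = votes.argmax(axis=0)
    print(fused.shape, np.bincount(fused.ravel(), minlength=n_labels))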

  15. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties, such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, have further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal fields induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  16. Development of tsunami fragility evaluation methods by large scale experiments. Part 2. Validation of the applicability of evaluation methods of impact force due to tsunami floating debris

    International Nuclear Information System (INIS)

    Takabatake, Daisuke; Kihara, Naoto; Kaida, Hideki; Miyagawa, Yoshinori; Ikeno, Masaaki; Shibayama, Atsushi

    2015-01-01

    In order to examine the applicability of existing equations for estimating the impact force due to tsunami floating debris, collision tests were carried out. In the experiments, logs and a full-scale light car were used. In this report, two types of existing equations are considered: one based on the Young's modulus of the debris (Eq. A) and the other based on the stiffness of the debris (Eq. B). The impact forces estimated using Eq. A with the logs' Young's modulus obtained from material tests agree with the forces measured in the collision tests. However, Eq. A does not apply to a car, because it is not easy to determine the Young's modulus of a car. On the other hand, the impact forces estimated using Eq. B with the car's stiffness obtained from a static loading test agree with the forces measured in the collision tests. This indicates that Eq. B enables us to estimate the impact force of floating debris such as a car if the stiffness of the debris is determined. (author)

  17. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems… the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its…

  18. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed at the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October. Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  19. Gamma-H2AX biodosimetry for use in large scale radiation incidents: comparison of a rapid ‘96 well lyse/fix’ protocol with a routine method

    Directory of Open Access Journals (Sweden)

    Jayne Moquet

    2014-03-01

    Full Text Available Following a radiation incident, preliminary dose estimates made by γ-H2AX foci analysis can supplement the early triage of casualties based on clinical symptoms. Sample processing time is important when many individuals need to be rapidly assessed. A protocol was therefore developed for high sample throughput that requires less than 0.1 ml blood, thus potentially enabling finger prick sampling. The technique combines red blood cell lysis and leukocyte fixation in one step on a 96 well plate, in contrast to the routine protocol, where lymphocytes in larger blood volumes are typically separated by Ficoll density gradient centrifugation with subsequent washing and fixation steps. The rapid ‘96 well lyse/fix’ method reduced the estimated sample processing time for 96 samples to about 4 h compared to 15 h using the routine protocol. However, scoring 20 cells in 96 samples prepared by the rapid protocol took longer than for the routine method (3.1 versus 1.5 h at zero dose; 7.0 versus 6.1 h for irradiated samples). Similar foci yields were scored for both protocols and consistent dose estimates were obtained for samples exposed to 0, 0.2, 0.6, 1.1, 1.2, 2.1 and 4.3 Gy of 250 kVp X-rays at 0.5 Gy/min and incubated for 2 h. Linear regression coefficients were 0.87 ± 0.06 (R² = 97.6%) and 0.85 ± 0.05 (R² = 98.3%) for estimated versus actual doses for the routine and lyse/fix method, respectively. The lyse/fix protocol can therefore facilitate high throughput processing for γ-H2AX biodosimetry for use in large scale radiation incidents, at the cost of somewhat longer foci scoring times.
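
    The regression coefficients quoted above come from comparing estimated and actual doses; the minimal sketch below shows the underlying calibration step, fitting mean foci per cell against known doses and inverting the fit for an unknown sample. The foci yields used here are invented, not the study's data.

    import numpy as np

    doses = np.array([0.0, 0.2, 0.6, 1.1, 1.2, 2.1, 4.3])      # Gy (calibration points)
    foci = np.array([0.1, 1.4, 4.2, 7.6, 8.1, 14.0, 28.5])     # mean foci per cell (toy values)

    slope, intercept = np.polyfit(doses, foci, 1)              # linear calibration curve
    estimate = lambda y: (y - intercept) / slope               # invert it for dose estimation
    print(f"estimated dose for 10 foci/cell: {estimate(10.0):.2f} Gy")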

  20. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  1. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo…

  2. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32: The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987; approved for public release. Only fragments of the abstract survive the scanned report form, e.g. “…arises, to reduce the spread in the LSGT 50% gap value. The worst charges, such as those with the highest or lowest densities, the largest re-pressed…”

  3. Residual stress measurement of large scaled welded pipe using neutron diffraction method. Effect of SCC crack propagation and repair weld on residual stress distribution

    International Nuclear Information System (INIS)

    Suzuki, Hiroshi; Katsuyama, Jinya; Tobita, Tohru; Morii, Yukio

    2011-01-01

    The RESA-1 neutron engineering diffractometer in the JRR-3 (Japan Research Reactor No.3) at the Japan Atomic Energy Agency, which is used for stress measurements, was upgraded to realize residual stress measurements of large scaled mechanical components. A series of residual stress measurements was made to obtain through-thickness residual stress distributions in a Type 304 stainless steel butt-welded pipe of 500A-sch.80 using the upgraded RESA-1 diffractometer. We evaluated the effects of crack propagation, such as stress corrosion cracking (SCC), and of a part-circumference repair weld on the residual stress distributions induced by girth welding. Measured residual stress distributions near the original girth weld revealed good agreement with typical results shown in previous works using the finite element method and deep hole drilling as well as neutron diffraction. After introducing a mock crack of 10 mm depth in the heat affected zone on the inside wall of the pipe by electro-discharge machining, the axial residual stresses were found to be released in the part of the mock crack. However, changes in the through-wall bending stress component and the self-equilibrated stress component were negligible, and hence the axial residual stress distribution in the ligament remained close to the original residual stresses near the girth weld without the mock crack. Furthermore, changes in hoop and radial residual stress were also small. The residual stress distributions after a part repair welding on the outer circumference of the girth weld were significantly different from the residual stress distributions near the original girth weld. The through-thickness average axial residual stress was increased due to an increase of the tensile membrane stress and mitigation of the bending stress after repair welding. Through the above studies, we demonstrated that the neutron diffraction technique is a useful and powerful tool for measuring residual stress distributions in large as well as thick mechanical

  4. Large-scale synthesis of Pb1-xLaxTiO3 ceramic powders by molten salt method

    International Nuclear Information System (INIS)

    Cai Zongying; Xing Xianran; Yu Ranbo; Liu Guirong; Xing Qifeng

    2006-01-01

    The ferroelectric perovskite-type lanthanum-doped lead titanate (PLT) ceramic powders were synthesized in one step from the starting materials PbC2O4, La2O3 and TiO2 in NaCl-KCl molten salts in the temperature range of 700-950 °C. It was found that the molten salt method is a large-scale and easy preparation route to produce PLT powders with high dispersity. Tetragonal-phase Pb1-xLaxTiO3 ceramic powders were identified by XRD in the composition range 0 ≤ x ≤ 0.3, and mono-dispersed particles with spherical shape and less than 100 nm size were observed by SEM. The grain sizes of the Pb1-xLaxTiO3 ceramic powders increased with increasing La content and decreased with calcination temperature. The grain growth progress, the possible reaction mechanism in molten salts, and its influencing factors are discussed in this work. The grain growth process was the main factor influencing the grain size, which depended on the solubility in the flux.

  5. Distributed chemical computing using ChemStar: an open source java remote method invocation architecture applied to large scale molecular data from PubChem.

    Science.gov (United States)

    Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander

    2008-04-01

    We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually every computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit and the open source JOELib package. As an application, for this set of compounds, the agreement of log P and TPSA between the packages was compared. Outliers were found to be mostly non-druglike compounds, and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, which is also easily adaptable to user demands due to its "plug-in architecture". The complete source code as well as calculated properties, along with links to PubChem resources, are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.
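
    As a rough illustration of the property-comparison step described above (not the ChemStar code, which is Java RMI based and uses the Marvin and JOELib toolkits), the sketch below computes log P and TPSA for a small batch of molecules with RDKit, a stand-in toolkit not mentioned in the paper; in the distributed setting the same loop would run on worker nodes over PubChem-sized input.

        # Sketch only: compute log P and TPSA for a few molecules with RDKit
        # (a substitute for the Marvin/JOELib toolkits used in ChemStar).
        from rdkit import Chem
        from rdkit.Chem import Crippen, Descriptors

        smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]   # hypothetical inputs
        for smi in smiles:
            mol = Chem.MolFromSmiles(smi)
            logp = Crippen.MolLogP(mol)        # one log P implementation
            tpsa = Descriptors.TPSA(mol)       # topological polar surface area
            print(f"{smi}\tlogP={logp:.2f}\tTPSA={tpsa:.1f}")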

  6. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  7. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. The traditional workflow requested quotes from construction companies for construction works where the works to be

  8. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165 Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöhe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  9. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  10. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    ...which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  11. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biassed galaxy formation, are all discussed. (UK)

  12. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  13. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  14. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  15. A method for the assessment of the visual impact caused by the large-scale deployment of renewable-energy facilities

    International Nuclear Information System (INIS)

    Rodrigues, Marcos; Montanes, Carlos; Fueyo, Norberto

    2010-01-01

    The production of energy from renewable sources requires a significantly larger use of the territory compared with conventional (fossil and nuclear) sources. For large penetrations of renewable technologies, such as wind power, the overall visual impact at the national level can be substantial, and may prompt public reaction. This study develops a methodology for the assessment of the visual impact that can be used to measure and report the level of impact caused by several renewable technologies (wind farms, solar photovoltaic plants or solar thermal ones), both at the local and regional (e.g. national) scales. Applications are shown to several large-scale, hypothetical scenarios of wind and solar-energy penetration in Spain, and also to the vicinity of an actual, single wind farm.

  16. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S', with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large-scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S' in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter.

  17. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few people have given a detailed picture of the large-scale communications architecture in the IOT. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communications architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communications in the IOT.

  18. A large-scale study of misophonia

    NARCIS (Netherlands)

    Rouw, R.; Erfanian, M.

    2018-01-01

    Objective: We aim to elucidate misophonia, a condition in which particular sounds elicit disproportionally strong aversive reactions. Method: A large online study extensively surveyed personal, developmental, and clinical characteristics of over 300 misophonics. Results: Most participants indicated

  19. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  20. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as well as a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'Scaled' Quantum Mechanical like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N. Cantorian small world, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4))

  1. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationship between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives are overlapped within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)

  2. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
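
    The grouping idea itself can be sketched with a toy, deliberately non-scalable greedy pass (the paper's contribution is precisely making this scale); the radius, capacity and coordinates below are hypothetical.

        # Toy sketch of grouping "close-by" trip requests into shared cabs;
        # greedy and quadratic, i.e. the baseline the scalable algorithms improve on.
        import math

        def haversine_km(a, b):
            lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
            h = (math.sin((lat2 - lat1) / 2) ** 2
                 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371.0 * math.asin(math.sqrt(h))

        def group_requests(pickups, radius_km=1.0, capacity=4):
            groups = []
            for p in pickups:
                for g in groups:
                    if len(g) < capacity and haversine_km(g[0], p) <= radius_km:
                        g.append(p)
                        break
                else:
                    groups.append([p])
            return groups

        print(group_requests([(55.680, 12.570), (55.681, 12.571), (55.700, 12.600)]))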

  3. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  4. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmologic principles and possible large scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities, is a full conformal symmetry in the 3-space defined by the cosmological velocity field, but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  5. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamics and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamics and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  6. LAVA: Large scale Automated Vulnerability Addition

    Science.gov (United States)

    2016-05-23

    LAVA: Large-scale Automated Vulnerability Addition, by Brendan Dolan-Gavitt, Patrick Hulin, Tim Leek, Fredrich Ulrich, and Ryan Whelan. Only fragments of the report text survive in this record, e.g. "...released, and thus rapidly become stale. We can expect tools to have been trained to detect bugs that have been released." and "...(low TCN) and dead (low liveness) program data is a powerful one for vulnerability injection. The DUAs it identifies are internal program quantities".

  7. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  8. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  9. Optimization method to branch-and-bound large SBO state spaces under dynamic probabilistic risk assessment via use of LENDIT scales and S2R2 sets

    International Nuclear Information System (INIS)

    Nielsen, Joseph; Tokuhiro, Akira; Khatry, Jivan; Hiromoto, Robert

    2014-01-01

    Traditional probabilistic risk assessment (PRA) methods have been developed to evaluate risk associated with complex systems; however, PRA methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. In order to address this combinatorial complexity, a branch-and-bound optimization technique is applied to the DPRA formalism to control the combinatorial state explosion. In addition, a new characteristic scaling metric (LENDIT – length, energy, number, distribution, information and time) is proposed as linear constraints that are used to guide the branch-and-bound algorithm to limit the number of possible states to be analyzed. The LENDIT characterization is divided into four groups or sets – 'state, system, resource and response' (S2R2) – describing reactor operations (normal and off-normal). In this paper we introduce the branch-and-bound DPRA approach and the application of LENDIT scales and S2R2 sets to a station blackout (SBO) transient. (author)
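
    The pruning idea that the paper applies to DPRA state spaces can be illustrated with a generic branch-and-bound sketch (this is not the LENDIT/S2R2 formulation, just the underlying technique): branches whose optimistic bound cannot beat the best complete solution found so far are discarded.

        # Generic branch-and-bound sketch on a 0/1 knapsack; items are assumed
        # sorted by value/weight so the fractional bound is a valid upper bound.
        def branch_and_bound(values, weights, capacity):
            best = 0
            n = len(values)

            def bound(i, value, room):
                for j in range(i, n):                       # optimistic: allow fractions
                    take = min(1.0, room / weights[j])
                    value += take * values[j]
                    room -= take * weights[j]
                    if room <= 0:
                        break
                return value

            def visit(i, value, room):
                nonlocal best
                if i == n or room == 0:
                    best = max(best, value)
                    return
                if bound(i, value, room) <= best:
                    return                                  # prune this branch
                if weights[i] <= room:
                    visit(i + 1, value + values[i], room - weights[i])
                visit(i + 1, value, room)

            visit(0, 0, capacity)
            return best

        print(branch_and_bound([10, 7, 4], [4, 3, 2], 5))   # -> 11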

  10. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
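
    A plain-Python sketch of the per-pixel linked-list idea may help (the thesis builds it on the GPU; the class and attribute names here are made up): every projected pathline segment is appended to the list of the pixel it covers, so filtering and recoloring happen per pixel without re-reading the original flow data.

        # Sketch of the per-pixel fragment-list data structure described above.
        from collections import defaultdict

        class ExplorableImage:
            def __init__(self, width, height):
                self.size = (width, height)
                self.fragments = defaultdict(list)     # (x, y) -> list of segments

            def add_segment(self, x, y, depth, pathline_id, attribute):
                self.fragments[(x, y)].append((depth, pathline_id, attribute))

            def shade(self, x, y, keep=lambda attr: True):
                """Return the front-most fragment among those passing the filter."""
                frags = [f for f in self.fragments[(x, y)] if keep(f[2])]
                return min(frags, default=None)        # smallest depth = closest

        img = ExplorableImage(800, 600)
        img.add_segment(10, 20, depth=0.3, pathline_id=7, attribute=1.5)
        print(img.shade(10, 20, keep=lambda a: a > 1.0))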

  11. Analysis of the forced vibration test of the Hualien large scale soil-structure interaction model using a flexible volume substructuring method

    International Nuclear Information System (INIS)

    Tang, H.T.; Nakamura, N.

    1995-01-01

    A 1/4-scale cylindrical reactor containment model was constructed in Hualien, Taiwan for soil-structure interaction (SSI) effect evaluation and SSI analysis procedure verification. Forced vibration tests were executed before backfill (FVT-1) and after backfill (FVT-2) to characterize soil-structure system characteristics under low excitations. A number of organizations participated in the pre-test blind prediction and post-test correlation analyses of the forced vibration test using various industry familiar methods. In the current study, correlation analyses were performed using a three-dimensional flexible volume substructuring method. The results are reported and soil property sensitivities are evaluated in the paper. (J.P.N.)

  12. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent of precipitation shortages is irrigation. In the seventies and eighties of the last century a number of large-scale sprinklers were built in Wielkopolska. At the end of the 1970s, 67 sprinklers with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler reached 95 ha. In 1989 there were 98 sprinklers, and the area equipped with them was more than 10 130 ha. The study was conducted on 7 large sprinklers with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 1990s and ownership changes in agriculture, large-scale sprinklers have undergone significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs. There has also been a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers and limitations of system solutions, supply difficulties, and high levels of equipment failure, which does not encourage rational use of the available sprinklers. A site inspection of the local area was carried out to show the current status of the remaining irrigation infrastructure. The adopted scheme for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler system.

  13. The Cosmology Large Angular Scale Surveyor (CLASS)

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  14. The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  15. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). These observational evidences have led us to a great deal of consensus on the cosmological model, the so-called LambdaCDM, and tight constraints on the cosmological parameters constituting the model. On the other hand, the advancement in cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We measured a huge dipolar modulation in the CMB, which mainly originated from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a number of well-identified objects. In this thesis, we explore measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE matched with 2MASS sources. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for detection of the kinematic dipole in future surveys.

  16. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5KW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  17. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally-identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
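
    For readers unfamiliar with the measure, a minimal sketch of Detrended Fluctuation Analysis (the quantity whose local scaling exponent the paper uses) is given below; the window sizes and the white-noise test input are arbitrary choices, not the paper's settings.

        # Sketch of DFA: the slope of log F(n) versus log n over window sizes n
        # is the scaling exponent; ~0.5 for uncorrelated input, higher for
        # long-range correlated (patchy) sequences such as GC-content profiles.
        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
            y = np.cumsum(x - np.mean(x))                 # integrated profile
            flucts = []
            for n in scales:
                f2 = []
                for i in range(len(y) // n):
                    seg = y[i * n:(i + 1) * n]
                    t = np.arange(n)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)
                    f2.append(np.mean((seg - trend) ** 2))
                flucts.append(np.sqrt(np.mean(f2)))
            return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

        rng = np.random.default_rng(0)
        print(dfa_exponent(rng.normal(size=4096)))        # close to 0.5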

  18. Large Scale Landform Mapping Using Lidar DEM

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-08-01

    Full Text Available In this study, LIDAR DEM data was used to obtain a primary landform map in accordance with a well-known methodology. This primary landform map was generalized using the Focal Statistics tool (Majority), considering the minimum area condition in cartographic generalization in order to obtain landform maps at 1:1000 and 1:5000 scales. Both the primary and the generalized landform maps were verified visually with hillshaded DEM and an orthophoto. As a result, these maps provide satisfactory visuals of the landforms. In order to show the effect of generalization, the area of each landform in both the primary and the generalized maps was computed. Consequently, landform maps at large scales could be obtained with the proposed methodology, including generalization using LIDAR DEM.
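
    The generalization step can be imitated outside a GIS with a moving-window majority filter; the sketch below uses SciPy's generic_filter as a stand-in for the Focal Statistics (Majority) tool, with a hypothetical integer landform raster and window size.

        # Sketch of majority-based generalization of a landform class raster.
        import numpy as np
        from scipy.ndimage import generic_filter

        def majority(window_values):
            return np.bincount(window_values.astype(int)).argmax()

        landform = np.random.randint(0, 5, size=(200, 200))         # class codes 0..4
        generalized = generic_filter(landform, majority, size=7)    # 7x7 majority vote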

  19. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with large number of high capacity nodes and transmission links, and shared by a large number of users...

  20. Optimization of Large-Scale Structural Systems

    DEFF Research Database (Denmark)

    Jensen, F. M.

    ...solutions to small problems with one or two variables to the optimization of large structures such as bridges, ships and offshore structures. The methods used for solving these problems have evolved from being classical differential calculus and calculus of variation to very advanced numerical techniques...

  1. Metastrategies in large-scale bargaining settings

    NARCIS (Netherlands)

    Hennes, D.; Jong, S. de; Tuyls, K.; Gal, Y.

    2015-01-01

    This article presents novel methods for representing and analyzing a special class of multiagent bargaining settings that feature multiple players, large action spaces, and a relationship among players' goals, tasks, and resources. We show how to reduce these interactions to a set of bilateral

  2. Local unitary transformation method for large-scale two-component relativistic calculations: case for a one-electron Dirac Hamiltonian.

    Science.gov (United States)

    Seino, Junji; Nakai, Hiromi

    2012-06-28

    An accurate and efficient scheme for two-component relativistic calculations at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level is presented. The present scheme, termed local unitary transformation (LUT), is based on the locality of the relativistic effect. Numerical assessments of the LUT scheme were performed on diatomic molecules such as HX and X2 (X = F, Cl, Br, I, and At) and hydrogen halide clusters, (HX)n (X = F, Cl, Br, and I). Total energies obtained by the LUT method agree well with conventional IODKH results. The computational costs of the LUT method are drastically lower than those of conventional methods, since the former scales linearly with respect to the system size and has a small prefactor.

  3. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos

  4. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  5. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size, such as in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representation of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  6. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged with little experience on how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, those undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment. These include e.g. institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups, identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and keep the brightest heads for your project.

  7. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement the photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building user’s electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart/Germany was used to calculate the potential of photovoltaic energy and to evaluate the local own consumption of the energy produced. For most buildings of the district only annual electrical consumption data was available and only selected buildings have electronic metering equipment. The available roof area for one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter’s total electricity consumption and half of this generated electricity is directly used within the buildings.
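
    The central comparison, hourly PV production against hourly consumption, reduces to a simple overlap calculation; the sketch below uses made-up two-day profiles, not the Scharnhauser Park data.

        # Sketch: fraction of PV output used directly in the building, and the
        # ratio of total PV production to total consumption.
        import numpy as np

        def self_consumption(production, consumption):
            direct = np.minimum(production, consumption).sum()
            return direct / production.sum(), production.sum() / consumption.sum()

        hours = np.arange(48)                                               # hypothetical profiles
        pv = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None) * 20   # kWh per hour
        load = 8 + 4 * np.sin(hours / 24 * 2 * np.pi)                       # kWh per hour
        direct_share, coverage = self_consumption(pv, load)
        print(f"directly used: {direct_share:.0%}, PV/consumption: {coverage:.0%}")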

  8. The Cosmology Large Angular Scale Surveyor (CLASS)

    Science.gov (United States)

    Cleary, Joseph

    2018-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an array of four telescopes designed to measure the polarization of the Cosmic Microwave Background. CLASS aims to detect the B-mode polarization from primordial gravitational waves predicted by cosmic inflation theory, as well as the imprint left by reionization upon the CMB E-mode polarization. This will be achieved through a combination of observing strategy and state-of-the-art instrumentation. CLASS is observing 70% of the sky to characterize the CMB at large angular scales, which will measure the entire CMB power spectrum from the reionization peak to the recombination peak. The four telescopes operate at frequencies of 38, 93, 145, and 217 GHz, in order to estimate Galactic synchrotron and dust foregrounds while avoiding atmospheric absorption. CLASS employs rapid polarization modulation to overcome atmospheric and instrumental noise. Polarization sensitive cryogenic detectors with low noise levels provide CLASS the sensitivity required to constrain the tensor-to-scalar ratio down to levels of r ~ 0.01 while also measuring the optical depth to reionization to sample-variance levels. These improved constraints on the optical depth to reionization are required to pin down the mass of neutrinos from complementary cosmological data. CLASS has completed a year of observations at 38 GHz and is in the process of deploying the rest of the telescope array. This poster provides an overview and update on the CLASS science, hardware and survey operations.

  9. Large-scale Intelligent Transporation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  10. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
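
    For orientation, a small dense-matrix sketch of Broyden's ("good") rank-1 update, the kind of quasi-Newton iteration the report compares against Newton's method, is given below; the test system and tolerances are illustrative only.

        # Sketch of Broyden's method: approximate the Jacobian and update it
        # with a rank-1 correction after every step instead of recomputing it.
        import numpy as np

        def broyden(f, x0, tol=1e-10, max_iter=50):
            x = np.asarray(x0, dtype=float)
            B = np.eye(len(x))                     # initial Jacobian approximation
            fx = f(x)
            for _ in range(max_iter):
                dx = np.linalg.solve(B, -fx)       # quasi-Newton step
                x_new = x + dx
                fx_new = f(x_new)
                if np.linalg.norm(fx_new) < tol:
                    return x_new
                df = fx_new - fx
                B += np.outer(df - B @ dx, dx) / (dx @ dx)   # rank-1 update
                x, fx = x_new, fx_new
            return x

        f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
        print(broyden(f, [1.0, 0.5]))              # should approach [0.7071, 0.7071]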

  11. Large-scale preparation of plasmid DNA.

    Science.gov (United States)

    Heilig, J S; Elbing, K L; Brent, R

    2001-05-01

    Although the need for large quantities of plasmid DNA has diminished as techniques for manipulating small quantities of DNA have improved, occasionally large amounts of high-quality plasmid DNA are desired. This unit describes the preparation of milligram quantities of highly purified plasmid DNA. The first part of the unit describes three methods for preparing crude lysates enriched in plasmid DNA from bacterial cells grown in liquid culture: alkaline lysis, boiling, and Triton lysis. The second part describes four methods for purifying plasmid DNA in such lysates away from contaminating RNA and protein: CsCl/ethidium bromide density gradient centrifugation, polyethylene glycol (PEG) precipitation, anion-exchange chromatography, and size-exclusion chromatography.

  12. Local unitary transformation method for large-scale two-component relativistic calculations. II. Extension to two-electron Coulomb interaction.

    Science.gov (United States)

    Seino, Junji; Nakai, Hiromi

    2012-10-14

    The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to a two-particle IODKH Hamiltonian as well as the one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculation. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X2 and hydrogen halide molecules, (HX)n (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear-scaling with respect to the system size and a small prefactor.

  13. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families, to generating alignments and phylogenetic trees, and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large scale and smaller scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  14. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace, and the most important issue for applications is the shift equation and the seed switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.

  15. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    Linton, M.; Khalatbari, A.

    2011-01-01

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large scale epidemiological study has been launched by the Fukushima medical university. It concerns the two million inhabitants of the Fukushima Prefecture. On the national level and with the support of public funds, medical care and follow-up, as well as systematic controls, are foreseen, notably to check the thyroids of 360,000 young people less than 18 years old and of 20,000 pregnant women in the Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low measured values, and because they know that some parts of the area are at least as contaminated as the area around Chernobyl was, some people are reluctant to go back home

  16. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  17. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  18. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car and a data automatic acquisition system. The mechanical design of the device covers the optical axis, the drive unit, the fixture and the wheels. The control system design covers hardware and software: the hardware is mainly based on a single-chip microcontroller system, and the software implements the photoelectric autocollimator workflow and the automatic data acquisition process. The device can acquire vertical measurement data automatically. The reliability of the device is verified by experimental comparison. The results meet the requirements of the right angle test procedure.

  19. Testing Einstein's Gravity on Large Scales

    Science.gov (United States)

    Prescod-Weinstein, Chandra

    2011-01-01

    A little over a decade has passed since two teams studying high redshift Type Ia supernovae announced the discovery that the expansion of the universe was accelerating. After all this time, we're still not sure how cosmic acceleration fits into the theory that tells us about the large-scale universe: General Relativity (GR). As part of our search for answers, we have been forced to question GR itself. But how will we test our ideas? We are fortunate enough to be entering the era of precision cosmology, where the standard model of gravity can be subjected to more rigorous testing. Various techniques will be employed over the next decade or two in the effort to better understand cosmic acceleration and the theory behind it. In this talk, I will describe cosmic acceleration, current proposals to explain it, and weak gravitational lensing, an observational effect that allows us to do the necessary precision cosmology.

  20. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  1. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v^2/2 - M cos x - P cos k(x-t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation T_r is described which acts as a microscope that focusses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, T_r yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of Cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation for describing the stability of a discrete family of cycles is derived. When combined with T_r, it allows one to prove the link between KAM tori and nearby cycles, conjectured by J. Greene, and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)

  2. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horvá th, Zoltá n

    2010-01-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one

  3. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-01-01

    structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination

  4. Facile Large-Scale Synthesis of 5- and 6-Carboxyfluoresceins

    DEFF Research Database (Denmark)

    Hammershøj, Peter; Ek, Pramod Kumar; Harris, Pernille

    2015-01-01

    A series of fluorescein dyes have been prepared from a common precursor through a very simple synthetic procedure, giving access to important precursors for fluorescent probes. The method has proven an efficient access to regioisomerically pure 5- and 6-carboxyfluoresceins on a large scale, in good...

  5. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the

  6. Technologies and challenges in large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Larsen, Martin Røssel

    2013-01-01

    become the main technique for discovery and characterization of phosphoproteins in a nonhypothesis driven fashion. In this review, we describe methods for state-of-the-art MS-based analysis of protein phosphorylation as well as the strategies employed in large-scale phosphoproteomic experiments...... with focus on the various challenges and limitations this field currently faces....

  7. Multiple time scale methods in tokamak magnetohydrodynamics

    International Nuclear Information System (INIS)

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B^2/(2μ_0), which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed

  8. CLASS: The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; hide

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low multipole ℓ. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, τ. © (2014) Society of Photo-Optical Instrumentation Engineers (SPIE).

  9. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available The paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC Marc/Mentat and its fatigue strain is analyzed; load simulation of the flange fatigue working condition is carried out with the Bladed software; the flange fatigue load spectrum is obtained with the rain-flow counting method; finally, the fatigue analysis of the top flange is performed with the fatigue analysis software MSC Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis of large-scale wind turbine generators and have practical engineering value.
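
    As a hedged illustration of the final step named in the abstract, the snippet below combines rain-flow cycle counts with an assumed S-N curve through the Palmgren-Miner linear damage sum D = Σ n_i / N_i. The stress ranges, cycle counts and S-N parameters are invented for illustration; they are not taken from the paper.

```python
# Hedged sketch of the Palmgren-Miner linear cumulative damage rule, D = sum(n_i / N_i).
# The cycle counts would come from rain-flow counting of the flange load history;
# the stress ranges, counts and S-N curve parameters below are illustrative only.

# (stress range [MPa], counted cycles), e.g. binned rain-flow output
cycles = [(40.0, 2.0e6), (80.0, 3.0e5), (120.0, 2.0e4), (160.0, 1.5e3)]

# Assumed Basquin-type S-N curve: cycles to failure N = C / S^m
C, m = 2.0e12, 3.0

def cycles_to_failure(stress_range):
    return C / stress_range ** m

damage = sum(n / cycles_to_failure(s) for s, n in cycles)
print(f"Miner damage sum D = {damage:.3f} (D >= 1 would indicate predicted fatigue failure)")
```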

  10. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; hide

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environmental conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  11. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter with recording on a laptop. The recorded surface seismic vibrations from more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The seismic effect was evaluated for safety against the permissible value of vibration velocity. For cases where permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.

  12. Lattice Boltzmann methods applied to large-scale three-dimensional virtual cores constructed from digital optical borehole images of the karst carbonate Biscayne aquifer in southeastern Florida

    Science.gov (United States)

    Sukop, Michael C.; Cunningham, Kevin J.

    2014-11-01

    Digital optical borehole images at approximately 2 mm vertical resolution and borehole caliper data were used to create three-dimensional renderings of the distribution of (1) matrix porosity and (2) vuggy megaporosity for the karst carbonate Biscayne aquifer in southeastern Florida. The renderings based on the borehole data were used as input into Lattice Boltzmann methods to obtain intrinsic permeability estimates for this extremely transmissive aquifer, where traditional aquifer test methods may fail due to very small drawdowns and non-Darcian flow that can reduce apparent hydraulic conductivity. Variogram analysis of the borehole data suggests a nearly isotropic rock structure at lag lengths up to the nominal borehole diameter. A strong correlation between the diameter of the borehole and the presence of vuggy megaporosity in the data set led to a bias in the variogram where the computed horizontal spatial autocorrelation is strong at lag distances greater than the nominal borehole size. Lattice Boltzmann simulation of flow across a 0.4 × 0.4 × 17 m (2.72 m3 volume) parallel-walled column of rendered matrix and vuggy megaporosity indicates a high hydraulic conductivity of 53 m s-1. This value is similar to previous Lattice Boltzmann calculations of hydraulic conductivity in smaller limestone samples of the Biscayne aquifer. The development of simulation methods that reproduce dual-porosity systems with higher resolution and fidelity and that consider flow through horizontally longer renderings could provide improved estimates of the hydraulic conductivity and help to address questions about the importance of scale.
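
    The link between the intrinsic permeability that a Lattice Boltzmann simulation yields and the hydraulic conductivity quoted above is the standard conversion K = k ρ g / μ. The short sketch below applies it with textbook water properties, working backwards from the reported K ≈ 53 m/s to the order of magnitude of intrinsic permeability implied; the property values are generic assumptions, not values from the study.

```python
# Hedged sketch: converting between intrinsic permeability k [m^2] and hydraulic
# conductivity K [m/s] via K = k * rho * g / mu. Water properties are textbook
# assumptions, not values from the study.
rho = 998.0    # water density [kg/m^3]
g = 9.81       # gravitational acceleration [m/s^2]
mu = 1.0e-3    # dynamic viscosity of water [Pa s]

def hydraulic_conductivity(k):
    """Intrinsic permeability [m^2] -> hydraulic conductivity [m/s]."""
    return k * rho * g / mu

K_reported = 53.0                         # [m/s], from the abstract
k_implied = K_reported * mu / (rho * g)   # invert the conversion
print(f"implied intrinsic permeability k ~ {k_implied:.2e} m^2")
print(f"round trip: K = {hydraulic_conductivity(k_implied):.1f} m/s")
```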

  13. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in the last years as a new tool to improve the traditional, stationary based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activities. We investigate for individual sites the exceedance probability in which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  14. Fast, large-scale hologram calculation in wavelet domain

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

    We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.

  15. Large-scale fuel cycle centres

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The US Nuclear Regulatory Commission (NRC) has considered the nuclear energy centre concept for fuel cycle plants in the Nuclear Energy Centre Site Survey 1975 (NECSS-75) Rep. No. NUREG-0001, an important study mandated by the US Congress in the Energy Reorganization Act of 1974 which created the NRC. For this study, the NRC defined fuel cycle centres as consisting of fuel reprocessing and mixed-oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle centre sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000MW(e). The types of fuel cycle facilities located at the fuel cycle centre permit the assessment of the role of fuel cycle centres in enhancing the safeguard of strategic special nuclear materials - plutonium and mixed oxides. Siting fuel cycle centres presents a smaller problem than siting reactors. A single reprocessing plant of the scale projected for use in the USA (1500-2000t/a) can reprocess fuel from reactors producing 50,000-65,000MW(e). Only two or three fuel cycle centres of the upper limit size considered in the NECSS-75 would be required in the USA by the year 2000. The NECSS-75 fuel cycle centre evaluation showed that large-scale fuel cycle centres present no real technical siting difficulties from a radiological effluent and safety standpoint. Some construction economies may be achievable with fuel cycle centres, which offer opportunities to improve waste-management systems. Combined centres consisting of reactors and fuel reprocessing and mixed-oxide fuel fabrication plants were also studied in the NECSS. Such centres can eliminate shipment not only of Pu but also mixed-oxide fuel. Increased fuel cycle costs result from implementation of combined centres unless the fuel reprocessing plants are commercial-sized. Development of Pu-burning reactors could reduce any economic penalties of combined centres. The need for effective fissile

  16. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large scale measurement channel allows the processing of the signal coming from a single neutron sensor in three different running modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large scale channel and a brief description of it are given. The results obtained so far in that domain are presented. - The fluctuation mode is thoroughly studied and the improvements to be made are defined. The study of a linear fluctuation channel with automatic switching of scales is described and the results of the tests are given. In this large scale channel, the data processing method is analogue. - To become independent of the problems generated by the use of analogue processing of the fluctuation signal, a digital method of data processing is tested. The validity of that method is demonstrated. The results obtained on a test system realized according to this method are given and a preliminary plan for further research is defined [fr

  17. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state of the art approaches.
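
    One concrete, widely used way to scale up kernel methods in the spirit described here is to replace the full kernel matrix with a low-rank approximation and fit a regularized linear model on the resulting features. The sketch below uses scikit-learn's Nyström approximation together with ridge regression; it illustrates the general idea only and is not the specific algorithm or guarantees discussed in the talk. The data set, kernel parameters and regularization strength are arbitrary choices.

```python
# Hedged sketch: scaling up a kernel method by combining a Nystroem low-rank
# kernel approximation with a regularized (ridge) linear model, instead of
# forming the full n x n kernel matrix. General idea only, not the talk's method.
from sklearn.datasets import make_regression
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=20000, n_features=50, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.02, n_components=300, random_state=0),  # subsampled kernel features
    Ridge(alpha=1.0),                                                      # regularized linear fit
)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```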

  18. Dynameomics: Data-driven methods and models for utilizing large-scale protein structure repositories for improving fragment-based loop prediction

    Science.gov (United States)

    Rysavy, Steven J; Beck, David AC; Daggett, Valerie

    2014-01-01

    Protein function is intimately linked to protein structure and dynamics, yet experimentally determined structures frequently omit regions within a protein due to indeterminate data, which is often due to protein dynamics. We propose that atomistic molecular dynamics simulations provide a diverse sampling of biologically relevant structures for these missing segments (and beyond) to improve structural modeling and structure prediction. Here we make use of the Dynameomics data warehouse, which contains simulations of representatives of essentially all known protein folds. We developed novel computational methods to efficiently identify, rank and retrieve small peptide structures, or fragments, from this database. We also created a novel data model to analyze and compare large repositories of structural data, such as contained within the Protein Data Bank and the Dynameomics data warehouse. Our evaluation compares these structural repositories for improving loop predictions and analyzes the utility of our methods and models. Using a standard set of loop structures, containing 510 loops, 30 for each loop length from 4 to 20 residues, we find that the inclusion of Dynameomics structures in fragment-based methods improves the quality of the loop predictions without being dependent on sequence homology. Depending on loop length, ∼25–75% of the best predictions came from the Dynameomics set, resulting in lower main chain root-mean-square deviations for all fragment lengths using the combined fragment library. We also provide specific cases where Dynameomics fragments provide better predictions for NMR loop structures than fragments from crystal structures. Online access to these fragment libraries is available at http://www.dynameomics.org/fragments. PMID:25142412

  19. Dynameomics: data-driven methods and models for utilizing large-scale protein structure repositories for improving fragment-based loop prediction.

    Science.gov (United States)

    Rysavy, Steven J; Beck, David A C; Daggett, Valerie

    2014-11-01

    Protein function is intimately linked to protein structure and dynamics, yet experimentally determined structures frequently omit regions within a protein due to indeterminate data, which is often due to protein dynamics. We propose that atomistic molecular dynamics simulations provide a diverse sampling of biologically relevant structures for these missing segments (and beyond) to improve structural modeling and structure prediction. Here we make use of the Dynameomics data warehouse, which contains simulations of representatives of essentially all known protein folds. We developed novel computational methods to efficiently identify, rank and retrieve small peptide structures, or fragments, from this database. We also created a novel data model to analyze and compare large repositories of structural data, such as contained within the Protein Data Bank and the Dynameomics data warehouse. Our evaluation compares these structural repositories for improving loop predictions and analyzes the utility of our methods and models. Using a standard set of loop structures, containing 510 loops, 30 for each loop length from 4 to 20 residues, we find that the inclusion of Dynameomics structures in fragment-based methods improves the quality of the loop predictions without being dependent on sequence homology. Depending on loop length, ∼ 25-75% of the best predictions came from the Dynameomics set, resulting in lower main chain root-mean-square deviations for all fragment lengths using the combined fragment library. We also provide specific cases where Dynameomics fragments provide better predictions for NMR loop structures than fragments from crystal structures. Online access to these fragment libraries is available at http://www.dynameomics.org/fragments. © 2014 The Protein Society.

  20. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications in de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is illustrated to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells. © 2014 Wiley Periodicals, Inc.

  1. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  2. Food appropriation through large scale land acquisitions

    International Nuclear Information System (INIS)

    Cristina Rulli, Maria; D’Odorico, Paolo

    2014-01-01

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crops yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300–550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190–370 million people could be supported by this land without closing of the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations. (letter)

  3. Solving large scale structure in ten easy steps with COLA

    Energy Technology Data Exchange (ETDEWEB)

    Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States); Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
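
    The core trick can be written compactly. If x_LPT(t) is the trajectory predicted analytically by Lagrangian Perturbation Theory for a particle, the COLA integrator time-steps only the residual displacement about that trajectory; schematically (notation simplified from the paper),

```latex
% Hedged, schematic form of the COLA split (simplified notation): the N-body
% integrator advances only the residual about the analytic LPT trajectory.
\frac{\mathrm{d}^2}{\mathrm{d}t^2}\left[\mathbf{x} - \mathbf{x}_{\mathrm{LPT}}\right]
  = -\nabla\Phi(\mathbf{x}) - \frac{\mathrm{d}^2 \mathbf{x}_{\mathrm{LPT}}}{\mathrm{d}t^2}
```

    so that with only a few timesteps the large scales are captured by the LPT piece while the discretized force term corrects the small scales.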

  4. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  5. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    According to the complex real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposed a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
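
    For reference, the conservation-law form that such Godunov-type finite volume schemes discretize is the standard two-dimensional shallow water system; the textbook form is sketched below, with bed-slope and friction effects collected into a source term, and is not necessarily the exact variant used in the paper.

```latex
% Standard 2D shallow water equations in conservation form (textbook version;
% S collects bed-slope and friction source terms).
\frac{\partial \mathbf{U}}{\partial t}
  + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
  + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S}, \qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\quad
\mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2} g h^2 \\ huv \end{pmatrix},\quad
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2} g h^2 \end{pmatrix}.
```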

  6. Updating Geospatial Data from Large Scale Data Sources

    Science.gov (United States)

    Zhao, R.; Chen, J.; Wang, D.; Shang, Y.; Wang, Z.; Li, X.; Ai, T.

    2011-08-01

    In the past decades, many geospatial databases have been established at national, regional and municipal levels over the world. Nowadays, it is widely recognized that keeping these established geospatial databases up to date is critical to their value, so more and more efforts have been devoted to their continuous updating. Currently, there exist two main types of methods for geospatial database updating: directly updating with remote sensing images or field surveying materials, and indirectly updating with other updated data, such as newly updated larger-scale data. The former is the basis, because the update data in both methods ultimately originate from field surveying and remote sensing; the latter is often more economical and faster. Therefore, after the larger-scale database is updated, the smaller-scale database should be updated correspondingly in order to keep the multi-scale geospatial databases consistent. In this situation, it is very reasonable to apply map generalization technology to the process of geospatial database updating. The latter is recognized as one of the most promising methods of geospatial database updating, especially in a collaborative updating environment in terms of map scale, i.e., where databases at different scales are produced and maintained separately by organizations at different levels, as in China. This paper is focused on applying digital map generalization to the updating of geospatial databases from large scale in the collaborative updating environment for SDI. The requirements for applying map generalization to spatial database updating are analyzed first. A brief review of geospatial data updating based on digital map generalization is then given. Based on the requirements analysis and review, we analyze the key factors for implementing the updating of geospatial data from large scale including technical

  7. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, the time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
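
    As a point of reference for what the parallelized pipeline computes, the snippet below runs standard serial affinity propagation on a small synthetic data set with scikit-learn. It is an in-memory baseline only; the paper's contribution is the shared-memory similarity-matrix construction and the distributed message passing, which this sketch does not reproduce.

```python
# Hedged sketch: serial affinity propagation baseline with scikit-learn.
# The paper parallelizes the similarity-matrix construction (shared memory) and
# the message passing (distributed); this small in-memory example only shows
# what is being computed.
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.8, random_state=0)

# Similarities: negative squared Euclidean distances, as in the original algorithm.
S = -pairwise_distances(X, metric="sqeuclidean")

ap = AffinityPropagation(affinity="precomputed", damping=0.7, random_state=0)
labels = ap.fit_predict(S)
print("clusters found:", len(ap.cluster_centers_indices_))
```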

  8. A method for apportionment of natural and anthropogenic contributions to heavy metal loadings in the surface soils across large-scale regions.

    Science.gov (United States)

    Hu, Yuanan; Cheng, Hefa

    2016-07-01

    Quantification of the contributions from anthropogenic sources to soil heavy metal loadings on regional scales is challenging because of the heterogeneity of soil parent materials and high variability of anthropogenic inputs, especially for the species that are primarily of lithogenic origin. To this end, we developed a novel method for apportioning the contributions of natural and anthropogenic sources by combining sequential extraction and stochastic modeling, and applied it to investigate the heavy metal pollution in the surface soils of the Pearl River Delta (PRD) in southern China. On the average, 45-86% of Zn, Cu, Pb, and Cd were present in the acid soluble, reducible, and oxidizable fractions of the surface soils, while only 12-24% of Ni, Cr, and As were partitioned in these fractions. The anthropogenic contributions to the heavy metals in the non-residual fractions, even the ones dominated by natural sources, could be identified and quantified by conditional inference trees. Combination of sequential extraction, Kriging interpolation, and stochastic modeling reveals that approximately 10, 39, 6.2, 28, 7.1, 15, and 46% of the As, Cd, Cr, Cu, Ni, Pb, and Zn, respectively, in the surface soils of the PRD were contributed by anthropogenic sources. These results were in general agreements with those obtained through subtraction of regional soil metal background from total loadings, and the soil metal inputs through atmospheric deposition as well. In the non-residual fractions of the surface soils, the anthropogenic contributions to As, Cd, Cr, Cu, Ni, Pb, and Zn, were 48, 42, 50, 51, 49, 24, and 70%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
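
    A rough arithmetic check of how the fraction-level and total-level numbers above relate: the overall anthropogenic share of a metal is approximately the product of its non-residual (acid soluble + reducible + oxidizable) share and the anthropogenic share within that non-residual pool. The snippet below applies this back-of-the-envelope relation to Zn; the implied non-residual share is an inference made here for illustration, not a figure reported in the abstract.

```python
# Hedged back-of-the-envelope check relating the abstract's Zn figures:
# total anthropogenic share ≈ (non-residual share) x (anthropogenic share within
# the non-residual fractions). The non-residual share is inferred here for
# illustration; it is not quoted directly in the abstract.
anthropogenic_total = 0.46           # ~46% of soil Zn attributed to anthropogenic sources
anthropogenic_in_nonresidual = 0.70  # ~70% of non-residual Zn attributed to anthropogenic sources

implied_nonresidual_share = anthropogenic_total / anthropogenic_in_nonresidual
print(f"implied non-residual share of Zn ≈ {implied_nonresidual_share:.0%}")
# ≈ 66%, which sits within the 45-86% range reported for Zn, Cu, Pb and Cd.
```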

  9. Large-scale fuel cycle centers

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The United States Nuclear Regulatory Commission (NRC) has considered the nuclear energy center concept for fuel cycle plants in the Nuclear Energy Center Site Survey - 1975 (NECSS-75) -- an important study mandated by the U.S. Congress in the Energy Reorganization Act of 1974 which created the NRC. For the study, NRC defined fuel cycle centers to consist of fuel reprocessing and mixed oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle center sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000 - 300,000 MWe. The types of fuel cycle facilities located at the fuel cycle center permit the assessment of the role of fuel cycle centers in enhancing safeguarding of strategic special nuclear materials -- plutonium and mixed oxides. Siting of fuel cycle centers presents a considerably smaller problem than the siting of reactors. A single reprocessing plant of the scale projected for use in the United States (1500-2000 MT/yr) can reprocess the fuel from reactors producing 50,000-65,000 MWe. Only two or three fuel cycle centers of the upper limit size considered in the NECSS-75 would be required in the United States by the year 2000 . The NECSS-75 fuel cycle center evaluations showed that large scale fuel cycle centers present no real technical difficulties in siting from a radiological effluent and safety standpoint. Some construction economies may be attainable with fuel cycle centers; such centers offer opportunities for improved waste management systems. Combined centers consisting of reactors and fuel reprocessing and mixed oxide fuel fabrication plants were also studied in the NECSS. Such centers can eliminate not only shipment of plutonium, but also mixed oxide fuel. Increased fuel cycle costs result from implementation of combined centers unless the fuel reprocessing plants are commercial-sized. Development of plutonium-burning reactors could reduce any

  10. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology. Enables practitioners to study distributed large scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  11. Large-Scale Pattern Discovery in Music

    Science.gov (United States)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
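
    The 2DFTM mentioned above is, at its core, the magnitude of a two-dimensional Fourier transform of a beat-synchronous chroma patch, which makes the representation insensitive to key transposition (a circular shift along the pitch axis) and to circular shifts in time. A minimal sketch of that computation is given below; the patch length, normalization and any post-processing are simplifications rather than the exact recipe from the thesis.

```python
# Hedged sketch of a 2DFTM-style feature: magnitude of the 2D FFT of a
# beat-synchronous chroma patch. The magnitude spectrum is invariant to circular
# shifts, hence robust to key transposition (pitch axis) and to circular shifts
# in time. Patch size and normalization here are simplifications.
import numpy as np

def two_dftm(chroma_patch):
    """chroma_patch: array of shape (12, n_beats); returns the flattened |FFT2|."""
    mag = np.abs(np.fft.fft2(chroma_patch))
    mag /= np.linalg.norm(mag) + 1e-12      # simple energy normalization (assumption)
    return mag.ravel()

# Demo: the feature is identical for a patch and a transposed (rolled) copy of it.
rng = np.random.default_rng(0)
patch = rng.random((12, 75))                # fake beat-synchronous chroma patch
transposed = np.roll(patch, 3, axis=0)      # shift up by 3 semitones
print(np.allclose(two_dftm(patch), two_dftm(transposed)))  # True
```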

  12. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  13. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    a microscope and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation......We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted...... locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated
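
    In the spirit of the excerpt, a minimal version of segmentation as outlier detection can be written in a few lines: estimate a background intensity distribution, convert each pixel to a p-value under that background model, and keep the pixels whose p-values fall below a chosen significance level. The sketch below uses a robust Gaussian background fit and a fixed significance level; it is a generic illustration under those assumptions, not the authors' estimator or their consistent threshold-selection rule.

```python
# Hedged sketch: segmentation as outlier detection under an estimated background
# model. A Gaussian background is fitted robustly (median/MAD), each pixel gets a
# p-value, and low-p pixels form the segment. The authors' estimator and
# threshold selection are more involved than this.
import numpy as np
from scipy import stats

def segment_as_outliers(image, alpha=1e-4):
    mu = np.median(image)                                   # robust background mean
    sigma = 1.4826 * np.median(np.abs(image - mu)) + 1e-12  # MAD-based std estimate
    p = stats.norm.sf(image, loc=mu, scale=sigma)           # P(pixel brighter than background)
    return p < alpha                                        # boolean foreground mask

# Demo on a synthetic image: noisy background plus a bright square.
rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, size=(128, 128))
img[40:60, 40:60] += 40.0
mask = segment_as_outliers(img)
print("foreground pixels detected:", int(mask.sum()))  # roughly the 400-pixel square
```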

  14. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
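
    To make the distinction mentioned in the abstract explicit: ecological diffusion places the spatially varying motility inside both derivatives, whereas Fickian diffusion organizes flux along gradients. Homogenization replaces the rapidly varying motility with an effective large-scale coefficient; the display below is a schematic of the flavor of the result under those assumptions, not the paper's exact equations or notation.

```latex
% Schematic contrast (hedged; not the paper's exact statement):
% ecological diffusion                        Fickian diffusion
\partial_t u = \nabla^2\left[\mu(\mathbf{x})\,u\right]
\qquad\text{vs.}\qquad
\partial_t u = \nabla\cdot\left[\mu(\mathbf{x})\,\nabla u\right],
% with homogenization yielding, on large scales, an equation of the form
\partial_t \bar{u} \approx \bar{\mu}\,\nabla^2 \bar{u},
\qquad \bar{\mu} \ \text{an effective motility set by the small-scale variation of}\ \mu .
```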

  15. Large-scale energy consumers pay less

    International Nuclear Information System (INIS)

    Denneman, A.

    2012-01-01

    The price of electricity in the Netherlands rose by 6 percent in the first quarter of 2012, whereas large business consumers are paying less. The natural gas price has risen by about 10 percent in the last year, both for households and for large business consumers. Meanwhile, households are paying twice as much for electricity and gas as large business consumers. [nl

  16. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models which are related to weather forecast and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how the precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North America gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing performance of spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
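
    Verification of a gridded product against gauges typically reduces to a handful of summary scores computed from co-located series after matching grid cells to stations. The snippet below computes a few common ones (bias, RMSE and correlation) on synthetic daily series; it is a generic illustration, not the study's actual criteria or data.

```python
# Hedged sketch: basic verification scores for a gridded precipitation product
# against gauge observations at co-located points. Synthetic data; not the
# study's criteria or datasets.
import numpy as np

rng = np.random.default_rng(0)
gauge = rng.gamma(shape=2.0, scale=3.0, size=365)                       # daily gauge totals [mm]
gridded = np.clip(gauge * 0.9 + rng.normal(0.0, 1.5, 365), 0.0, None)   # imperfect gridded estimate

bias = np.mean(gridded - gauge)
rmse = np.sqrt(np.mean((gridded - gauge) ** 2))
corr = np.corrcoef(gridded, gauge)[0, 1]
print(f"bias = {bias:.2f} mm/day, RMSE = {rmse:.2f} mm/day, r = {corr:.2f}")
```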

  17. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  18. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other side, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the neutral gas is done via electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  19. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  20. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in radioactive waste disposal schemes, the Lasgit in situ experiment was planned and is currently in progress. Modelling of the experiment will permit a better understanding of the responses, help confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increases. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  1. Large scale structure from viscous dark matter

    CERN Document Server

    Blas, Diego; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-01-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale $k_m$ for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale $k_m$, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with $N$-body simulations up to scales $k=0.2 \\, h/$Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to varia...

  2. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
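
    As a rough illustration of the kind of multi-criteria evaluation such a tool performs, the sketch below combines several normalized, user-weighted criterion rasters into a single suitability score per grid cell. The criterion names, weights, and exclusion handling are hypothetical and are not taken from the NREL tool.

```python
import numpy as np

def suitability(criteria, weights, exclusion_mask=None):
    """Weighted-sum suitability score for each grid cell.

    criteria: dict of name -> 2D array, each already normalized to [0, 1]
              (1 = most favourable).
    weights:  dict of name -> nonnegative weight; renormalized to sum to 1.
    exclusion_mask: optional boolean array; True marks cells excluded from siting.
    """
    total = sum(weights.values())
    score = sum((weights[name] / total) * layer for name, layer in criteria.items())
    if exclusion_mask is not None:
        score = np.where(exclusion_mask, np.nan, score)   # drop excluded cells
    return score

# Hypothetical 100x100 study area with three criteria.
rng = np.random.default_rng(1)
criteria = {
    "solar_resource": rng.random((100, 100)),
    "distance_to_transmission": rng.random((100, 100)),
    "slope": rng.random((100, 100)),
}
weights = {"solar_resource": 0.5, "distance_to_transmission": 0.3, "slope": 0.2}
score = suitability(criteria, weights)
best = np.unravel_index(np.nanargmax(score), score.shape)  # top-ranked cell
```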

  3. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low speed-side of the mixing layer, and a reduced activity on the high speed-side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  4. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed for GPUs based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using algorithms written in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were solved on the GPU to test the performance of the package. Comparing results from the single-CPU solver with those from the GPU solver showed a speed-up of roughly 50 times. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
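
    To make the semi-implicit Fourier idea concrete, here is a minimal CPU (NumPy) sketch of one common variant for the Allen-Cahn equation with a double-well potential; it treats the Laplacian implicitly in Fourier space and the nonlinear term explicitly. Grid size, mobility, and interface coefficient are illustrative choices, not parameters of the published GPU package.

```python
import numpy as np

def allen_cahn_semi_implicit(n=256, steps=2000, dt=0.05, L=1.0, kappa=1.0):
    """Semi-implicit Fourier-spectral integration of the Allen-Cahn equation
    d(phi)/dt = -L * (phi**3 - phi - kappa * laplacian(phi))
    on a periodic n x n grid (illustrative parameters)."""
    rng = np.random.default_rng(0)
    phi = 0.02 * rng.standard_normal((n, n))           # small random initial field

    k = 2.0 * np.pi * np.fft.fftfreq(n)                 # wavenumbers (unit grid spacing)
    k2 = k[:, None]**2 + k[None, :]**2                  # |k|^2 on the grid
    denom = 1.0 + dt * L * kappa * k2                   # implicit (linear) part

    for _ in range(steps):
        nonlinear = phi**3 - phi                        # explicit bulk driving force
        phi_hat = (np.fft.fft2(phi) - dt * L * np.fft.fft2(nonlinear)) / denom
        phi = np.real(np.fft.ifft2(phi_hat))
    return phi

phi = allen_cahn_semi_implicit()
```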

  5. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    Hybrid inflation; Higgs scalar field; structure formation; curvaton. ... We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which ...

  6. Application of GRA method, dynamic analysis and fuzzy set theory in evaluation and selection of emergency treatment technology for large scale phenol spill incidents

    Science.gov (United States)

    Zhao, Jingjing; Yu, Lean; Li, Lian

    2017-05-01

    Selecting an appropriate technology in an emergency response is a very important issue, as various kinds of chemical contingency spills frequently take place. Owing to the complexity, fuzziness and uncertainty of chemical contingency spills, the GRA method and dynamic analysis are combined with fuzzy set theory and applied to the selection and evaluation of emergency treatment technologies. Finally, an emergency phenol spill incident that occurred on a highway is provided to illustrate the applicability and feasibility of the proposed methods.
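
    A minimal sketch of the grey relational analysis (GRA) step is given below: candidate technologies are scored against several criteria, the decision matrix is normalized, grey relational coefficients are computed against an ideal reference, and alternatives are ranked by their grey relational grade. The alternatives, criteria, and weights are invented for illustration and are not from the cited study.

```python
import numpy as np

def gra_rank(matrix, weights, benefit, rho=0.5):
    """Rank alternatives (rows) by grey relational grade.

    matrix:  alternatives x criteria scores.
    weights: criterion weights, summing to 1.
    benefit: boolean array, True if larger values of the criterion are better.
    rho:     distinguishing coefficient (conventionally 0.5).
    """
    m = matrix.astype(float)
    lo, hi = m.min(axis=0), m.max(axis=0)
    norm = np.where(benefit, (m - lo) / (hi - lo), (hi - m) / (hi - lo))  # normalize columns
    delta = np.abs(1.0 - norm)                          # distance to the ideal sequence (all ones)
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)  # grey relational coefficients
    grade = coeff @ weights                             # weighted grey relational grade
    return np.argsort(-grade), grade

# Hypothetical example: 4 treatment technologies, 3 criteria
# (removal efficiency, response time, cost); time and cost are "smaller is better".
scores = np.array([[0.90, 4.0, 120.0],
                   [0.85, 2.5, 150.0],
                   [0.95, 6.0, 100.0],
                   [0.80, 3.0,  90.0]])
order, grade = gra_rank(scores, weights=np.array([0.5, 0.3, 0.2]),
                        benefit=np.array([True, False, False]))
```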

  7. Doppler method leak detection for LMFBR steam generators. Pt. 2. Detection characteristics of bubble in-water using large scale SG model

    International Nuclear Information System (INIS)

    Kumagai, Hiromichi

    2000-01-01

    To prevent the expansion of tube damage and to maintain structural integrity in the steam generators (SGs) of a fast breeder reactor (FBR), it is necessary to detect precisely and immediately the leakage of water from heat transfer tubes. Therefore, an active acoustic method was developed. Previous studies have revealed that, in practical steam generators, the active acoustic method can detect bubbles of 10 l/s within 10 seconds. However, to prevent the expansion of damage to neighboring tubes, it is necessary to detect smaller leakages of water from the heat transfer tubes. The Doppler method is designed to detect small leakages and to find the source of a leak before damage spreads to neighboring tubes. The detection sensitivity of the Doppler method and the influence of background noise were investigated experimentally. In-water experiments were performed using an SG full-sector model that simulates actual SGs. The results show that the Doppler method can detect bubbles of 0.1 l/s (equivalent to a water leak rate of about 0.1 g/s) within a few seconds and that the background noise has little effect on water leak detection performance. The Doppler method thus has great potential for the detection of water leakage in SGs. (author)

  8. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    Full Text Available This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  9. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  10. Towards large-scale plasma-assisted synthesis of nanowires

    Science.gov (United States)

    Cvelbar, U.

    2011-05-01

    Large quantities of nanomaterials, e.g. nanowires (NWs), are needed to overcome the high market price of nanomaterials and make nanotechnology widely available for general public use and applications to numerous devices. Therefore, there is an enormous need for new methods or routes for synthesis of those nanostructures. Here plasma technologies for synthesis of NWs, nanotubes, nanoparticles or other nanostructures might play a key role in the near future. This paper presents a three-dimensional problem of large-scale synthesis connected with the time, quantity and quality of nanostructures. Herein, four different plasma methods for NW synthesis are presented in contrast to other methods, e.g. thermal processes, chemical vapour deposition or wet chemical processes. The pros and cons are discussed in detail for the case of two metal oxides: iron oxide and zinc oxide NWs, which are important for many applications.

  11. Engineering large-scale agent-based systems with consensus

    Science.gov (United States)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  12. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulics & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping scale effects

  13. Thermal power generation projects ``Large Scale Solar Heating``; EU-Thermie-Projekte ``Large Scale Solar Heating``

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the ``Large-Scale Solar Heating`` programme for the Europe-wide development of this technology. The demonstration programme developed from it was judged favourably by the experts but was not immediately (1996) accepted for financial support. In November 1997 the EU commission provided 1.5 million ECU, which allowed an updated project proposal to be realised. By mid-1997 a smaller project had already been approved; it had been applied for under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)

  14. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: administrators carry a heavy workload, and much time is spent on the management and maintenance of the system. Thousands of nodes housed in large machine rooms are easily confused with one another. How can a large-scale cluster system be managed accurately and effectively? This article introduces ELFms for large-scale cluster systems and proposes how to realize automatic management of such systems. (authors)

  15. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    Efficient processing techniques are vital to the success of any manufacturing industry. The processing techniques determine the quality of the products and thus to a large extent the performance and reliability of the products that are manufactured. The dielectric electroactive polymer (DEAP...

  16. Linking Large-Scale Reading Assessments: Comment

    Science.gov (United States)

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  17. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    Science.gov (United States)

    2015-09-30

    ...titled "Ocean Basin Impact of Ambient Noise on Marine Mammal Detectability, Distribution, and Acoustic Communication". Managing marine mammals in response to potentially negative interactions with human activity requires knowledge of how many animals are present in an area during a specific time period. Many marine mammal species are relatively hard to sight, making standard visual methods of density estimation difficult and

  18. The possible ways for soil complex purification from radionuclides in conditions of large-scale contamination and effectiveness of used methods

    International Nuclear Information System (INIS)

    Bondar, Y.; Konoplia, E.

    1999-01-01

    Natural factors have had a considerable aggregate effect on the self-purification of soils contaminated with radioactive pollution. Such factors as natural decay, vertical migration of nuclides over the soil profile, as well as cyclic removal of nuclides from the soil by vegetation, have been analyzed. For the Belarus Polessie soils contaminated as a result of the Chernobyl catastrophe, a 1.5-1.7 fold decrease of long-lived radionuclides in the rooting layer has taken place during the past 13 years. The qualitative characteristics of the soil purification process by phytocoenosis have been established, and the effectiveness and limitations of this method have been demonstrated. The effect of microbiological soil processes on radionuclide mobility has been studied, and the intensification of the migration process by means of optimal nutrient media has been considered. Hydroseparation of highly dispersed soil particles, with simultaneous consideration of the soil organic substance content, allows attainment of a purification coefficient of 1.5-2. Further increase of C pur leads to irreversible loss of humus substances, depriving the soil of its fertility; in addition, the quantity of solid wastes that must be localized increases dramatically. A soil cut has been carried out on an experimental plot. It has been shown that the effectiveness of this method is high in comparison with other appropriate methods. However, with time, the purification rate decreases because radionuclides migrate beyond the cutting layer. (author)

  19. A Method to Search for Correlations of Ultra-high Energy Cosmic-Ray Masses with the Large-scale Structures in the Local Galaxy Density Field

    Science.gov (United States)

    Ivanov, A. A.

    2013-02-01

    One of the main goals of investigations using present and future giant extensive air shower (EAS) arrays is the mass composition of ultra-high energy cosmic rays (UHECRs). A new approach to the problem is presented, combining the analysis of arrival directions with the statistical test of the paired EAS samples. One of the ideas of the method is to search for possible correlations between UHECR masses and their separate sources; for instance, if there are two sources in different areas of the celestial sphere injecting different nuclei, but the fluxes are comparable so that arrival directions are isotropic, then the aim is to reveal a difference in the mass composition of cosmic-ray fluxes. The method is based on a non-parametric statistical test—the Wilcoxon signed-rank routine—which does not depend on the populations fitting any parameterized distributions. Two particular algorithms are proposed: first, using measurements of the depth of the EAS maximum position in the atmosphere; and second, relying on the age variance of air showers initiated by different primary particles. The formulated method is applied to the Yakutsk array data, in order to demonstrate the possibility of searching for a difference in average mass composition between the two UHECR sets, arriving particularly from the supergalactic plane and a complementary region.

  20. A METHOD TO SEARCH FOR CORRELATIONS OF ULTRA-HIGH ENERGY COSMIC-RAY MASSES WITH THE LARGE-SCALE STRUCTURES IN THE LOCAL GALAXY DENSITY FIELD

    International Nuclear Information System (INIS)

    Ivanov, A. A.

    2013-01-01

    One of the main goals of investigations using present and future giant extensive air shower (EAS) arrays is the mass composition of ultra-high energy cosmic rays (UHECRs). A new approach to the problem is presented, combining the analysis of arrival directions with the statistical test of the paired EAS samples. One of the ideas of the method is to search for possible correlations between UHECR masses and their separate sources; for instance, if there are two sources in different areas of the celestial sphere injecting different nuclei, but the fluxes are comparable so that arrival directions are isotropic, then the aim is to reveal a difference in the mass composition of cosmic-ray fluxes. The method is based on a non-parametric statistical test—the Wilcoxon signed-rank routine—which does not depend on the populations fitting any parameterized distributions. Two particular algorithms are proposed: first, using measurements of the depth of the EAS maximum position in the atmosphere; and second, relying on the age variance of air showers initiated by different primary particles. The formulated method is applied to the Yakutsk array data, in order to demonstrate the possibility of searching for a difference in average mass composition between the two UHECR sets, arriving particularly from the supergalactic plane and a complementary region.

  1. A METHOD TO SEARCH FOR CORRELATIONS OF ULTRA-HIGH ENERGY COSMIC-RAY MASSES WITH THE LARGE-SCALE STRUCTURES IN THE LOCAL GALAXY DENSITY FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, A. A., E-mail: ivanov@ikfia.ysn.ru [Shafer Institute for Cosmophysical Research and Aeronomy, 31 Lenin Avenue, Yakutsk 677980 (Russian Federation)

    2013-02-15

    One of the main goals of investigations using present and future giant extensive air shower (EAS) arrays is the mass composition of ultra-high energy cosmic rays (UHECRs). A new approach to the problem is presented, combining the analysis of arrival directions with the statistical test of the paired EAS samples. One of the ideas of the method is to search for possible correlations between UHECR masses and their separate sources; for instance, if there are two sources in different areas of the celestial sphere injecting different nuclei, but the fluxes are comparable so that arrival directions are isotropic, then the aim is to reveal a difference in the mass composition of cosmic-ray fluxes. The method is based on a non-parametric statistical test, the Wilcoxon signed-rank routine, which does not depend on the populations fitting any parameterized distributions. Two particular algorithms are proposed: first, using measurements of the depth of the EAS maximum position in the atmosphere; and second, relying on the age variance of air showers initiated by different primary particles. The formulated method is applied to the Yakutsk array data, in order to demonstrate the possibility of searching for a difference in average mass composition between the two UHECR sets, arriving particularly from the supergalactic plane and a complementary region.
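
    As an illustration of the statistical core of the method, the sketch below applies the Wilcoxon signed-rank test to paired samples of shower-maximum depth (Xmax), one member of each pair drawn from arrival directions near the supergalactic plane and one from the complementary region; a small p-value would indicate a difference in average mass composition. The data and pairing here are synthetic placeholders, not Yakutsk array measurements.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

# Synthetic paired Xmax values (g/cm^2): one member of each pair from the
# supergalactic-plane region, the other from the complementary region.
xmax_sgp  = rng.normal(750.0, 40.0, size=200)    # hypothetical "lighter" composition
xmax_comp = rng.normal(735.0, 40.0, size=200)    # hypothetical "heavier" composition

# Wilcoxon signed-rank test on the paired differences; no distributional
# assumption on the Xmax populations is required.
stat, p_value = wilcoxon(xmax_sgp, xmax_comp)
print(f"W = {stat:.1f}, p = {p_value:.3g}")
```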

  2. LARGE-SCALE FLOWS IN PROMINENCE CAVITIES

    International Nuclear Information System (INIS)

    Schmit, D. J.; Gibson, S. E.; Tomczyk, S.; Reeves, K. K.; Sterling, Alphonse C.; Brooks, D. H.; Williams, D. R.; Tripathi, D.

    2009-01-01

    Regions of rarefied density often form cavities above quiescent prominences. We observed two different cavities with the Coronal Multichannel Polarimeter on 2005 April 21 and with Hinode/EIS on 2008 November 8. Inside both of these cavities, we find coherent velocity structures based on spectral Doppler shifts. These flows have speeds of 5-10 km s^-1, occur over length scales of tens of megameters, and persist for at least 1 hr. Flows in cavities are an example of the nonstatic nature of quiescent structures in the solar atmosphere.

  3. Recent Progress in Large-Scale Structure

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    I will discuss recent progress in the understanding of how to model galaxy clustering. While recent analyses have focussed on the baryon acoustic oscillations as a probe of cosmology, galaxy redshift surveys contain a lot more information than the acoustic scale. In extracting this additional information three main issues need to be well understood: nonlinear evolution of matter fluctuations, galaxy bias and redshift-space distortions. I will present recent progress in modeling these three effects that pave the way to constraining cosmology and galaxy formation with increased precision.

  4. Large-scale cryopumping for controlled fusion

    International Nuclear Information System (INIS)

    Pittenger, L.C.

    1977-01-01

    Vacuum pumping by freezing out or otherwise immobilizing the pumped gas is an old concept. In several plasma physics experiments for controlled fusion research, cryopumping has been used to provide clean, ultrahigh vacua. Present day fusion research devices, which rely almost universally upon neutral beams for heating, are high gas throughput systems, the pumping of which is best accomplished by cryopumping in the high mass-flow, moderate-to-high vacuum regime. Cryopumping systems have been developed for neutral beam injection systems on several fusion experiments (HVTS, TFTR) and are being developed for the overall pumping of a large, high-throughput mirror containment experiment (MFTF). In operation, these large cryopumps will require periodic defrosting, some schemes for which are discussed, along with other operational considerations. The development of cryopumps for fusion reactors is begun with the TFTR and MFTF systems. Likely paths for necessary further development for power-producing reactors are also discussed

  5. Large Scale Demand Response of Thermostatic Loads

    DEFF Research Database (Denmark)

    Totu, Luminita Cristiana

    This study is concerned with large populations of residential thermostatic loads (e.g. refrigerators, air conditioning or heat pumps). The purpose is to gain control over the aggregate power consumption in order to provide balancing services for the electrical grid, without affecting the temperature regulation of the individual loads. The control architecture is defined by parsimonious communication requirements that also preserve a high level of data privacy, and it furthermore guarantees robust and secure local operation. Mathematical models are put forward, and the effectiveness is shown by numerical simulations. A case study of 10000

  6. Large-scale cryopumping for controlled fusion

    Energy Technology Data Exchange (ETDEWEB)

    Pittenger, L.C.

    1977-07-25

    Vacuum pumping by freezing out or otherwise immobilizing the pumped gas is an old concept. In several plasma physics experiments for controlled fusion research, cryopumping has been used to provide clean, ultrahigh vacua. Present day fusion research devices, which rely almost universally upon neutral beams for heating, are high gas throughput systems, the pumping of which is best accomplished by cryopumping in the high mass-flow, moderate-to-high vacuum regime. Cryopumping systems have been developed for neutral beam injection systems on several fusion experiments (HVTS, TFTR) and are being developed for the overall pumping of a large, high-throughput mirror containment experiment (MFTF). In operation, these large cryopumps will require periodic defrosting, some schemes for which are discussed, along with other operational considerations. The development of cryopumps for fusion reactors is begun with the TFTR and MFTF systems. Likely paths for necessary further development for power-producing reactors are also discussed.

  7. Large scale calculations for hadron spectroscopy

    International Nuclear Information System (INIS)

    Rebbi, C.

    1985-01-01

    The talk reviews some recent Monte Carlo calculations for Quantum Chromodynamics, performed on Euclidean lattices of rather large extent. The purpose of the calculations is to provide accurate determinations of quantities, such as interquark potentials or mass eigenvalues, which are relevant for hadronic spectroscopy. Results obtained in quenched QCD on 16^3 x 32 lattices are illustrated, and a discussion of computational resources and techniques required for the calculations is presented. 18 refs., 3 figs., 2 tabs

  8. Underground large scale test facility for rocks

    International Nuclear Information System (INIS)

    Sundaram, P.N.

    1981-01-01

    This brief note discusses two advantages of locating the facility for testing rock specimens of large dimensions in an underground space. Such an environment can be made to contribute part of the enormous axial load and stiffness requirements needed to get complete stress-strain behavior. The high pressure vessel may also be located below the floor level since the lateral confinement afforded by the rock mass may help to reduce the thickness of the vessel

  9. Large scale flow in the dayside magnetosheath

    International Nuclear Information System (INIS)

    Crooker, N.U.; Siscoe, G.L.; Eastman, T.E.; Frank, L.A.; Zwickl, R.D.

    1984-01-01

    The degree of control over plasma flow direction exerted by the compressed magnetic field in the dayside magnetosheath is examined by comparing ISEE 1 LEPEDEA data with hydrodynamic and magnetohydrodynamic predictions. Measured flow directions projected toward the subsolar region pass within approximately 1 R_E of the aberrated theoretical hydrodynamic stagnation point in 11 of 20 cases analyzed. The remaining nine cases pass within approximately 2-3 R_E of the stagnation point. One case with large deflection has been studied in detail with high-time-resolution plasma and magnetic field data both from ISEE 1 and from ISEE 3, in the role of a solar wind monitor. The deflected flow is persistent over a period of 1 1/2 hours, and its direction is consistent with a stagnation point displacement resulting from increased, asymmetric magnetic field pressure contributions during periods of low Alfven Mach number, as predicted by Russell et al. Of the other eight cases with large deflections, four are associated with flux transfer events identified independently by Berchem and Russell. The observed deflections in these cases are consistent with either the subsolar merging line or the antiparallel merging hypothesis, but not exclusively with one or the other. The results relating to the formation of a stagnation line rather than a stagnation point are inconclusive

  10. Large Scale Reduction of Graphite Oxide Project

    Science.gov (United States)

    Calle, Carlos; Mackey, Paul; Falker, John; Zeitlin, Nancy

    2015-01-01

    This project seeks to develop an optical method to reduce graphite oxide into graphene efficiently and in larger formats than currently available. Current reduction methods are expensive, time-consuming or restricted to small, limited formats. Graphene has potential uses in ultracapacitors, energy storage, solar cells, flexible and light-weight circuits, touch screens, and chemical sensors. In addition, graphite oxide is a sustainable material that can be produced from any form of carbon, making this method environmentally friendly and adaptable for in-situ reduction.

  11. Large scale Brownian dynamics of confined suspensions of rigid particles

    Science.gov (United States)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose
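
    The key numerical ingredient, the random-finite-difference (RFD) treatment of the stochastic drift that arises from a configuration-dependent mobility, can be sketched in a few lines. The toy mobility, forces, and parameters below are invented for illustration; the actual method uses RPY mobilities, kernel (Ewald/GPU) matrix-vector products and Lanczos iterations rather than dense factorizations.

```python
import numpy as np

def bd_step_rfd(q, force, mobility, kT, dt, rng, delta=1e-4):
    """One Euler-Maruyama Brownian dynamics step with a random-finite-difference
    (RFD) estimate of the stochastic drift kT * div(M), for a configuration-
    dependent mobility matrix M(q). Dense linear algebra only; an illustrative
    sketch, not the preconditioned-Krylov machinery of the paper."""
    n = q.size
    M = mobility(q)                                    # n x n SPD mobility matrix
    # Brownian increment with covariance 2*kT*dt*M via a Cholesky factor.
    L = np.linalg.cholesky(M)
    noise = np.sqrt(2.0 * kT * dt) * (L @ rng.standard_normal(n))
    # RFD drift: kT*div(M) ~ (kT/delta) * [M(q + delta/2*W) - M(q - delta/2*W)] @ W
    W = rng.standard_normal(n)
    drift = (kT / delta) * ((mobility(q + 0.5 * delta * W)
                             - mobility(q - 0.5 * delta * W)) @ W)
    return q + dt * (M @ force(q)) + dt * drift + noise

# Hypothetical 1D toy model: two beads in a harmonic trap with a
# configuration-dependent (diagonal) mobility.
rng = np.random.default_rng(0)
mobility = lambda q: np.diag(1.0 / (1.0 + q**2))       # made-up positive-definite mobility
force = lambda q: -q                                    # harmonic restoring force
q = np.array([0.5, -0.3])
for _ in range(1000):
    q = bd_step_rfd(q, force, mobility, kT=1.0, dt=1e-3, rng=rng)
```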

  12. Large Scale Asset Extraction for Urban Images

    KAUST Repository

    Affara, Lama Ahmed; Nan, Liangliang; Ghanem, Bernard; Wonka, Peter

    2016-01-01

    Object proposals are currently used for increasing the computational efficiency of object detection. We propose a novel adaptive pipeline for interleaving object proposals with object classification and use it as a formulation for asset detection. We first preprocess the images using a novel and efficient rectification technique. We then employ a particle filter approach to keep track of three priors, which guide proposed samples and get updated using classifier output. Tests performed on over 1000 urban images demonstrate that our rectification method is faster than existing methods without loss in quality, and that our interleaved proposal method outperforms current state-of-the-art. We further demonstrate that other methods can be improved by incorporating our interleaved proposals. © Springer International Publishing AG 2016.

  13. Large Scale Reduction of Graphite Oxide

    Data.gov (United States)

    National Aeronautics and Space Administration — This project seeks to develop an optical method to reduce graphite oxide into graphene efficiently and in larger formats than currently available. Current reduction...

  14. Large Scale Asset Extraction for Urban Images

    KAUST Repository

    Affara, Lama Ahmed

    2016-09-16

    Object proposals are currently used for increasing the computational efficiency of object detection. We propose a novel adaptive pipeline for interleaving object proposals with object classification and use it as a formulation for asset detection. We first preprocess the images using a novel and efficient rectification technique. We then employ a particle filter approach to keep track of three priors, which guide proposed samples and get updated using classifier output. Tests performed on over 1000 urban images demonstrate that our rectification method is faster than existing methods without loss in quality, and that our interleaved proposal method outperforms current state-of-the-art. We further demonstrate that other methods can be improved by incorporating our interleaved proposals. © Springer International Publishing AG 2016.

  15. FINAL REPORT (MILESTONE DATE 9/30/11) FOR SUBCONTRACT NO. B594099 NUMERICAL METHODS FOR LARGE-SCALE DATA FACTORIZATION

    Energy Technology Data Exchange (ETDEWEB)

    De Sterck, H

    2011-10-18

    The following work has been performed by PI Hans De Sterck and graduate student Manda Winlaw for the required tasks 1-5 (as listed in the Statement of Work). Graduate student Manda Winlaw has visited LLNL January 31-March 11, 2011 and May 23-August 19, 2010, working with Van Henson and Mike O'Hara on non-negative matrix factorizations (NMF). She has investigated the dense subgraph clustering algorithm from 'Finding Dense Subgraphs for Sparse Undirected, Directed, and Bipartite Graphs' by Chen and Saad, testing this method on several term-document matrices and adapting it to cluster based on the rank of the subgraphs instead of the density. Manda Winlaw was awarded a first prize in the annual LLNL summer student poster competition for a poster on her NMF research. PI Hans De Sterck has developed a new adaptive algebraic multigrid algorithm for computing a few dominant or minimal singular triplets of sparse rectangular matrices. This work builds on adaptive algebraic multigrid methods that were further developed by the PI and collaborators (including Sanders and Henson) for Markov chains. The method also combines and extends existing multigrid algorithms for the symmetric eigenproblem. The PI has visited LLNL February 22-25, 2011, and has given a CASC seminar 'Algebraic Multigrid for the Singular Value Problem' on this work on February 23, 2011. During his visit, he has discussed this work and related topics with Van Henson, Geoffrey Sanders, Panayot Vassilevski, and others. He has tested the algorithm on PDE matrices and on a term-document matrix, with promising initial results. Manda Winlaw has also started to work, with O'Hara, on estimating probability distributions over undirected graph edges. The goal is to estimate probabilistic models from sets of undirected graph edges for the purpose of prediction, anomaly detection and support to supervised learning. Graduate student Manda Winlaw is writing a paper on the results obtained with

  16. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.

  17. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng; Yuan, Ganzhao; Ghanem, Bernard

    2013-01-01

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
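
    The bilateral rank-1 update at the heart of BILGO can be sketched on a toy problem; the snippet below applies the idea to projecting a symmetric matrix onto the positive semidefinite cone, using a two-variable nonnegative least-squares step in place of the paper's line search. This is an illustrative reading of the algorithm, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def bilgo_quadratic(A, n_iters=200):
    """BILGO-style bilateral greedy iteration, sketched for the toy SDP
    min_{X PSD} f(X) = 0.5 * ||X - A||_F^2.

    Each iteration combines the current iterate with a rank-1 matrix built
    from the leading eigenvector of the descent direction; the two nonnegative
    combination coefficients come from a tiny NNLS subproblem."""
    n = A.shape[0]
    X = np.zeros((n, n))
    for _ in range(n_iters):
        grad = X - A                                   # gradient of f at X
        evals, evecs = np.linalg.eigh(-grad)           # descent direction -grad
        v = evecs[:, -1]                               # leading eigenvector
        R = np.outer(v, v)                             # candidate rank-1 update
        # Solve min_{alpha, beta >= 0} ||alpha*X + beta*R - A||_F via NNLS.
        design = np.column_stack([X.ravel(), R.ravel()])
        (alpha, beta), _ = nnls(design, A.ravel())
        X = alpha * X + beta * R                       # bilateral update keeps X PSD
    return X

# Hypothetical target: project a random symmetric matrix onto the PSD cone.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = 0.5 * (B + B.T)
X = bilgo_quadratic(A)
```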

  18. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  19. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  20. Factors selection in landslide susceptibility modelling on large scale following the gis matrix method: application to the river Beiro basin (Spain)

    Directory of Open Access Journals (Sweden)

    D. Costanzo

    2012-02-01

    Full Text Available A procedure to select the controlling factors connected to slope instability has been defined. It allowed us to assess the landslide susceptibility in the Rio Beiro basin (about 10 km2 in the northeastern area of the city of Granada, Spain). Field and remote (Google EarthTM) recognition techniques allowed us to generate a landslide inventory consisting of 127 phenomena. To discriminate between stable and unstable conditions, a diagnostic area was chosen, limited to the crown and the toe of the scarp of each landslide. 15 controlling or determining factors were defined considering the available topographic, geologic, geomorphologic and pedologic data. Univariate tests, using both association coefficients and validation results of single-variable susceptibility models, allowed us to select the best predictors, which were combined for the unique conditions analysis. For each of the five recognised landslide typologies, susceptibility maps for the best models were prepared. In order to verify both the goodness of fit and the prediction skill of the susceptibility models, two different validation procedures were applied and compared. Both procedures are based on a random partition of the landslide archive into a test and a training subset. The first method is based on the analysis of the shape of the success and prediction rate curves, which are quantitatively analysed exploiting two morphometric indexes. The second method is based on the analysis of the degree of fit, by considering the relative error between the target landslides intersected by each of the susceptibility classes into which the study area was partitioned. Both validation procedures confirmed a very good predictive performance of the susceptibility models and of the procedure followed to select the controlling factors.
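
    The success-rate curve used in the validation step can be computed with a few lines of code: cells are ranked by decreasing susceptibility, and the cumulative fraction of landslide cells captured is plotted against the cumulative fraction of the study area. The arrays below are synthetic stand-ins for the real susceptibility map and landslide inventory.

```python
import numpy as np

def success_rate_curve(susceptibility, landslide_mask):
    """Cumulative fraction of landslide cells captured versus cumulative
    fraction of the study area, ranking cells by decreasing susceptibility."""
    s = susceptibility.ravel()
    y = landslide_mask.ravel().astype(float)
    order = np.argsort(-s)                           # most susceptible cells first
    captured = np.cumsum(y[order]) / y.sum()         # fraction of landslides captured
    area = np.arange(1, s.size + 1) / s.size         # fraction of area considered
    auc = np.trapz(captured, area)                   # area under the curve
    return area, captured, auc

# Synthetic example: susceptibility correlates imperfectly with landslide presence.
rng = np.random.default_rng(3)
susc = rng.random((200, 200))
landslides = rng.random((200, 200)) < 0.02 * (1 + 3 * susc)   # hypothetical inventory
area, captured, auc = success_rate_curve(susc, landslides)
print(f"AUC of the success-rate curve: {auc:.3f}")
```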

  1. Large Scale Experiments on Spacecraft Fire Safety

    DEFF Research Database (Denmark)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame

  2. Responses in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Barreira, Alexandre; Schmidt, Fabian, E-mail: barreira@MPA-Garching.MPG.DE, E-mail: fabians@MPA-Garching.MPG.DE [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a "bias" expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^NG_{l=0}(k_1, k_2), in the limit where one of the modes, say k_2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k_2 ≲ 0.06 h Mpc^-1, and for any k_1 ≳ 2 k_2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.

  3. Responses in large-scale structure

    Science.gov (United States)

    Barreira, Alexandre; Schmidt, Fabian

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a "bias" expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^NG_{l=0}(k_1, k_2), in the limit where one of the modes, say k_2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k_2 ≲ 0.06 h Mpc^-1, and for any k_1 ≳ 2 k_2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.
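
    For orientation, the angle-averaged (isotropic) part of the response expansion takes the schematic form below; the notation follows the abstract, and the expressions are meant only as a reminder of the structure, not as the paper's full anisotropic decomposition.

```latex
% Schematic structure assumed here (isotropic / angle-averaged part only):
% the local power spectrum in the presence of a long-wavelength density
% perturbation \delta_L is expanded in response coefficients R_n(k), and the
% first-order response controls the angle-averaged squeezed-limit bispectrum.
P(k \mid \delta_L) = P(k)\left[ 1 + \sum_{n \ge 1} \frac{1}{n!}\, R_n(k)\, \delta_L^{\,n} \right],
\qquad
\lim_{q \to 0} \left\langle B(k_1, k_2, q) \right\rangle_{\text{angle}} \simeq R_1(k_1)\, P(k_1)\, P_L(q).
```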

  4. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution

  5. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    International Nuclear Information System (INIS)

    Dednam, W; Botha, A E

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution

  6. SIMON: Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Sugawara, Akihiro; Kishimoto, Yasuaki

    2003-01-01

    Development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate physical phenomena of tokamak-type nuclear fusion plasmas by simulation, and to exchange information and carry out joint research with scientists around the world over the internet. The characteristics of SIMON are as follows: 1) reduced simulation load through a trigger-sending method, 2) visualization of simulation results and a hierarchical structure of analysis, 3) a reduced number of licenses by driving the software from the command line, 4) improved support for networked access to simulation data output through HTML (Hyper Text Markup Language), 5) avoidance of complex built-in work on the client side, and 6) small and portable software. The visualization method for large-scale simulation, the HTML-based remote collaboration system, the trigger-sending method, the hierarchical analysis method, the introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)

  7. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links of each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
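
    As a minimal illustration of a BCM-type plasticity rule with a sliding threshold (a generic textbook form, not the whole-network extension developed in this work; parameter values are arbitrary):

```python
import numpy as np

def simulate_bcm(x, w0=0.1, theta0=0.05, eta=1e-3, tau_theta=100.0, dt=1.0):
    """Integrate a single-synapse BCM-type plasticity rule with a sliding threshold.

    x : presynaptic input signal, one sample per time step
    dw/dt     = eta * y * (y - theta) * x        (bidirectional plasticity)
    dtheta/dt = (y**2 - theta) / tau_theta       (sliding modification threshold)
    where y = w * x is the (linear) postsynaptic response.
    """
    w, theta = w0, theta0
    w_hist = np.empty(len(x))
    for t, xt in enumerate(x):
        y = w * xt
        w += dt * eta * y * (y - theta) * xt
        theta += dt * (y**2 - theta) / tau_theta
        w_hist[t] = w
    return w_hist

# Example: drive the synapse with random positive inputs and watch the weight drift.
weights = simulate_bcm(np.abs(np.random.default_rng(0).normal(size=5000)))
```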

  8. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  9. Large-scale synthesis of double cauliflower-like Sb{sub 2}S{sub 3} microcrystallines by hydrothermal method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Lei [Key Laboratory for Soft Chemistry and Functional Materials, Ministry of Education (China); Xu, Hanyue [School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094 (China); Han, Qiaofeng, E-mail: hanqiaofeng@njust.edu.cn [Key Laboratory for Soft Chemistry and Functional Materials, Ministry of Education (China); Wang, Xin [Key Laboratory for Soft Chemistry and Functional Materials, Ministry of Education (China)

    2013-09-25

    Highlights: •Highly uniform double cauliflower-like Sb{sub 2}S{sub 3} particles were synthesized via hydrothermal method. •Influence of reaction conditions on the morphology of the products was discussed. •Double cauliflower-like Sb{sub 2}S{sub 3} superstructures revealed broad spectrum response. -- Abstract: The double cauliflower-like Sb{sub 2}S{sub 3} superstructures assembled by nanorods were prepared using SbCl{sub 3} and Na{sub 2}S⋅9H{sub 2}O as raw materials, dodecyltrimethylammonium bromide (DTAB, C{sub 15}H{sub 31}BrN) as surfactant under acidic condition at 180 °C for 30 h. The structure, morphology and composition of the product were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and energy-dispersive spectroscopy (EDS). The effect of reaction conditions including temperature, reaction time and surfactants on the sample morphology was discussed and a possible mechanism for the formation of cauliflower-like Sb{sub 2}S{sub 3} was proposed. The cauliflower-like Sb{sub 2}S{sub 3} microcrystallines revealed broad spectrum response, which may have a good application prospect in solar energy utilization and photoelectric conversion fields.

  10. Large Scale Experiments on Spacecraft Fire Safety

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Minster, Olivier; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant knowhow about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  11. A Large-Scale Study of Misophonia.

    Science.gov (United States)

    Rouw, Romke; Erfanian, Mercede

    2018-03-01

    We aim to elucidate misophonia, a condition in which particular sounds elicit disproportionally strong aversive reactions. A large online study extensively surveyed personal, developmental, and clinical characteristics of over 300 misophonics. Most participants indicated that their symptoms started in childhood or early teenage years. Severity of misophonic responses increases over time. One third of participants reported having family members with similar symptoms. Half of our participants reported no comorbid clinical conditions, and the other half reported a variety of conditions. Only posttraumatic stress disorder (PTSD) was related to the severity of the misophonic symptoms. Remarkably, half of the participants reported experiencing euphoric, relaxing, and tingling sensations with particular sounds or sights, a relatively unfamiliar phenomenon called autonomous sensory meridian response (ASMR). It is unlikely that another "real" underlying clinical, psychiatric, or psychological disorder can explain away the misophonia. The possible relationship with PTSD and ASMR warrants further investigation. © 2017 Wiley Periodicals, Inc.

  12. EPFM verification by a large scale test

    International Nuclear Information System (INIS)

    Okamura, H.; Yagawa, G.; Hidaka, T.; Sato, M.; Urabe, Y.; Iida, M.

    1993-01-01

    The step B test was carried out as part of the elastic-plastic fracture mechanics (EPFM) studies in the Japanese PTS integrity research project. In the step B test, a bending load was applied to a large flat specimen under thermal shock, while the tensile load was kept constant during the test. In the previous analysis, the estimated stable crack growth at the deepest point of the crack was 3 times larger than the experimental value. In order to diminish the difference between them from the point of view of FEM modeling, a more precise FEM mesh was introduced. According to the new analysis, the difference decreased considerably. That is, the stable crack growth evaluation was improved by adopting a precise FEM model near the crack tip, and the remaining difference was of almost the same order as that in the NKS4-1 test analysis by MPA. 8 refs., 17 figs., 5 tabs

  13. Goethite Bench-scale and Large-scale Preparation Tests

    Energy Technology Data Exchange (ETDEWEB)

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the

  14. Treatments of tilapia (Oreochromis niloticus) using nitric oxide for quality improvement: Establishing a potential method for large-scale processing of farmed fish.

    Science.gov (United States)

    Wang, Zi-Chao; Yan, Yuzhen; Su, Ping; Zhao, Mou-Ming; Xia, Ning; Chen, De-Wei

    2018-07-01

    To find a substitute for present methods of slaughtering tilapia, we demonstrated the influence of nitric oxide (NO, as a saturated NO solution) applied as euthanasia before slaughter on the animal welfare and muscle color of tilapia. The results suggested that NO euthanasia significantly improved animal welfare and muscle color. In addition, the investigation of NO postmortem treatment on the muscle color and color stability of tilapia fillets suggested that it not only improved the muscle color and color stability but also prolonged the shelf life of tilapia fillets during refrigerated storage. We then further investigated the effect of NO euthanasia on the quality of tilapia fillets and estimated the safety of both NO treatments (NO euthanasia and NO postmortem treatment) for application in the industrial manufacturing of tilapia and possibly of other fish species. NO euthanasia was adopted in this study following a simulated fish processing line. HbNO and MbNO values were measured to clarify the mechanism and process of NO euthanasia. Blood parameters, muscle pH, rigor index, drip loss and total volatile basic nitrogen (TVB-N) values were measured to evaluate the quality of fillets obtained from NO-euthanized tilapia. In addition, the nitrate (NO3-) levels in the muscles after refrigerated storage were determined to assess the food safety of both NO euthanasia and NO postmortem treatment. Fillets obtained from tilapia euthanized by NO showed a later reduction of muscle pH, a later onset of rigor mortis postmortem and less drip loss during refrigerated storage than the control. NO euthanasia caused less TVB-N than the control and prolonged the shelf life of tilapia fillets. Moreover, the NO3- levels in the muscles after both NO euthanasia and NO postmortem treatment were below the maximum permitted limit after refrigerated storage. Both NO euthanasia and NO postmortem treatment are suitable for improving the

  15. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  16. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred

    2010-08-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.

  17. Irradiation of onions on a large scale

    International Nuclear Information System (INIS)

    Kawashima, Koji; Hayashi, Toru; Uozumi, J.; Sugimoto, Toshio; Aoki, Shohei

    1984-01-01

    A large number of onions of var. Kitamiki and Ohotsuku were irradiated in September, followed by storage at 0 deg C or 5 deg C. The onions were shifted from cold-storage facilities to room temperature in mid-March or in mid-April of the following year. Their sprouting, rooting, spoilage characteristics and sugar content were observed during storage at room temperature. Most of the unirradiated onions sprouted either outside or inside the bulbs during storage at room temperature, and almost all of the irradiated ones showed small buds with browning inside the bulb in mid-April, irrespective of the storage temperature. Rooting and/or expansion of the bottom were observed in the unirradiated samples. Although the irradiated onions did not develop roots, they showed some expansion of the bottom. Both the irradiated and unirradiated onions spoiled only slightly unless they sprouted, and sprouted onions spoiled easily. There was no difference in the glucose content between the unirradiated and irradiated onions, but the irradiated ones yielded a higher sucrose content when stored at room temperature. Irradiation treatment did not have an obvious effect on the quality of freeze-dried onion slices. (author)

  18. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-01-01

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized

  19. Software for large scale tracking studies

    International Nuclear Information System (INIS)

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to join to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and the corresponding controls efforts among present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways to describe accelerator lattice problems to computers as well as to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational data base centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references

  20. Superconducting materials for large scale applications

    Energy Technology Data Exchange (ETDEWEB)

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  1. Large Scale Testing of Drystone Retaining Structures

    OpenAIRE

    Mundell, Chris

    2009-01-01

    Drystone walls have been used extensively around the world as earth retaining structures wherever suitable stone is found. Commonly about 0.6m thick (irrespective of height), there are about 9000km of drystone retaining walls on the UK road network alone, mostly built in the 19th and early 20th centuries, with an estimated replacement value in excess of £1 billion[1]. Drystone wall design is traditionally empirical, based on local knowledge of what has worked in the past. Methods vary from re...

  2. Large scale anisotropy studies with the Auger Observatory

    International Nuclear Information System (INIS)

    Santos, E.M.; Letessier-Selvon, A.

    2006-01-01

    With the increasing Auger surface array data sample of the highest energy cosmic rays, large scale anisotropy studies at this part of the spectrum become a promising path towards the understanding of the origin of ultra-high energy cosmic particles. We describe the methods underlying the search for distortions in the cosmic-ray arrival directions over large angular scales, that is, larger than those commonly employed in the search for correlations with point-like sources. The widely used tools, known as coverage maps, are described, and some of the issues involved in their calculation are presented through Monte Carlo based studies. Coverage computation requires deep knowledge of the local detection efficiency, including the influence of weather parameters like temperature and pressure. Particular attention is devoted to a newly proposed method to extract the coverage, based upon the assumption of time factorization of an extensive air shower detector acceptance. We use Auger monitoring data to test the validity of such a hypothesis. We finally show the necessity of using more than one coverage to extract any possible anisotropic pattern on the sky, by pointing to some of the biases present in commonly used methods based, for example, on the scrambling of the UTC arrival times for each event. (author)
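
    As a rough sketch of the scrambling-based reference (coverage) construction mentioned above (an illustrative reconstruction, not the Auger analysis code; the coordinate conversion is left to a user-supplied callable):

```python
import numpy as np

def scrambled_reference_maps(utc_times, local_dirs, to_equatorial, n_shuffles=100, rng=None):
    """Build reference ("coverage"-like) sky maps by scrambling UTC arrival times.

    utc_times     : array of event arrival times (UTC)
    local_dirs    : per-event local arrival directions (e.g. zenith, azimuth)
    to_equatorial : user-supplied callable mapping (times, local_dirs) to equatorial
                    coordinates; it stands in for the real astrometric conversion.
    Randomly reassigning arrival times to events erases any genuine anisotropy while
    preserving the detector's local acceptance, so the average of the scrambled maps
    approximates the expected isotropic sky coverage.
    """
    rng = rng or np.random.default_rng()
    maps = []
    for _ in range(n_shuffles):
        shuffled_times = rng.permutation(utc_times)
        maps.append(to_equatorial(shuffled_times, local_dirs))
    return maps
```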

  3. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  4. Large-scale lateral nanowire arrays nanogenerators

    Science.gov (United States)

    Wang, Zhong L; Xu, Chen; Qin, Yong; Zhu, Guang; Yang, Rusen; Hu, Youfan; Zhang, Yan

    2014-01-07

    In a method of making a generating device, a plurality of spaced apart elongated seed members are deposited onto a surface of a flexible non-conductive substrate. An elongated conductive layer is applied to a top surface and a first side of each seed member, thereby leaving an exposed second side opposite the first side. A plurality of elongated piezoelectric nanostructures is grown laterally from the second side of each seed layer. A second conductive material is deposited onto the substrate adjacent each elongated first conductive layer so as to be coupled to the distal end of each of the plurality of elongated piezoelectric nanostructures. The second conductive material is selected so as to form a Schottky barrier between the second conductive material and the distal end of each of the plurality of elongated piezoelectric nanostructures and so as to form an electrical contact with the first conductive layer.

  5. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need a huge amount of training data to cover sufficient biological variability. Learning methods scaling badly with the number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which...... learns from data rather than having a predefined feature set. We explore a deep learning approach based on convolutional neural networks (CNNs) for segmenting three-dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D......

  6. Large-scale biophysical evaluation of protein PEGylation effects

    DEFF Research Database (Denmark)

    Vernet, Erik; Popa, Gina; Pozdnyakova, Irina

    2016-01-01

    PEGylation is the most widely used method to chemically modify protein biopharmaceuticals, but surprisingly limited public data is available on the biophysical effects of protein PEGylation. Here we report the first large-scale study, with site-specific mono-PEGylation of 15 different proteins...... of PEGylation on the thermal stability of a protein based on data generated by circular dichroism (CD), differential scanning calorimetry (DSC), or differential scanning fluorimetry (DSF). In addition, DSF was validated as a fast and inexpensive screening method for thermal unfolding studies of PEGylated...... proteins. Multivariate data analysis revealed clear trends in biophysical properties upon PEGylation for a subset of proteins, although no universal trends were found. Taken together, these findings are important in the consideration of biophysical methods and evaluation of second...

  7. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    Science.gov (United States)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; previously, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
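
    A minimal sketch of the kind of logarithmic mapping discussed above (illustrative only; the proposal's actual estimators handle survey masks, discreteness, and bias, which are ignored here):

```python
import numpy as np

def log_mapped_power_spectrum(delta, box_size, nbins=30):
    """Power spectrum of the log-mapped overdensity field A = ln(1 + delta).

    delta    : 3D overdensity field on a regular cubic grid (delta > -1 everywhere)
    box_size : comoving side length of the box
    Returns (k_bin_centers, P(k)) for the mean-subtracted log field.
    """
    a = np.log1p(delta)
    a -= a.mean()                                   # remove the mean of the log field
    n = delta.shape[0]
    a_k = np.fft.rfftn(a) * (box_size / n) ** 3     # discrete -> continuum FT convention
    power = np.abs(a_k) ** 2 / box_size ** 3        # simple P(k) estimator

    # Spherically average |a_k|^2 in bins of |k|.
    kx = np.fft.fftfreq(n, d=box_size / n) * 2 * np.pi
    kz = np.fft.rfftfreq(n, d=box_size / n) * 2 * np.pi
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
    bins = np.linspace(0.0, kmag.max(), nbins)
    which = np.digitize(kmag.ravel(), bins)
    pk = np.array([power.ravel()[which == i].mean() if np.any(which == i) else np.nan
                   for i in range(1, len(bins))])
    return 0.5 * (bins[:-1] + bins[1:]), pk
```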

  8. Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow

    NARCIS (Netherlands)

    Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.

    2004-01-01

    The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike

  9. International safeguards in large scale nuclear facilities

    International Nuclear Information System (INIS)

    Gupta, D.; Heil, J.

    1977-01-01

    The trend in the energy sector in most of the industrialized areas of the world shows rather clearly that the rate of installation of nuclear plants will be very high and that the largest possible units of nuclear material handling and storage facilities will be built. Various experiments and analyses of measurement methods relevant to safeguards, in typical nuclear facilities like a fuel reprocessing or a fabrication plant, have shown that the associated measurement errors, as obtained under normal operating conditions, are mainly dominated by systematic errors, and that such systematic errors may lie in the range of percentages of the measured amount, so that a material balance in such a plant could not normally be closed with high accuracy. The simplest way of getting around the problem would be to increase the frequency of striking a material balance over a given period of time. This could, however, lead to an enormous increase in the technical and financial burden for the operator of a facility. The present paper analyses this problem in some detail for some facilities and shows that with a properly developed information system in such plants and a combination of containment, surveillance and accountancy measures, safeguards statements for relatively low significant amounts can be made within the attainable range of measurement accuracies

  10. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power systems with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  11. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package "ATLAS" has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines, i.e., basic arithmetic routines, routines for solving linear simultaneous equations and for solving general eigenvalue problems, and utility routines. The subroutines are useful in large scale plasma-fluid simulations. (auth.)

  12. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine large-scale agricultural land acquisitions in nine West African countries -Burkina Faso, Guinea-Bissau, Guinea, Benin, Mali, Togo, Senegal, Niger, and Côte d'Ivoire. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  13. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  14. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of large-scale magnetic field. For this purpose, we perform nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  15. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real......, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...

  16. Risk Management Challenges in Large-scale Energy PSS

    DEFF Research Database (Denmark)

    Tegeltija, Miroslava; Oehmen, Josef; Kozin, Igor

    2017-01-01

    Probabilistic risk management approaches have a long tradition in engineering. A large variety of tools and techniques based on the probabilistic view of risk is available and applied in PSS practice. However, uncertainties that arise due to lack of knowledge and information are still missing adequate representations. We focus on a large-scale energy company in Denmark as one case of current product/service-systems risk management best practices. We analyze their risk management process and investigate the tools they use in order to support decision making processes within the company. First, we identify the following challenges in the current risk management practices that are in line with the literature: (1) current methods are not appropriate for situations dominated by weak knowledge and information; (2) quality of traditional models in such situations is open to debate; (3) quality of input

  17. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues/challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values contain information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because variation in patients' sample quality and drifts in instrument performance are unavoidable for clinical studies performed over the course of several years, we believe that our approach will be useful to analyze large-scale clinical proteomics data.

  18. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
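
    As a small illustration of the kind of pre-modelling screening described above (not the study's actual workflow; column names are hypothetical), one can flag basins whose long-term water balance cannot close:

```python
import pandas as pd

def flag_inconsistent_basins(df):
    """Flag basins whose long-term water balance cannot close.

    df : DataFrame with hypothetical columns, all as long-term means in mm/year:
         'precip' (P), 'discharge' (Q), 'pot_evap' (PET)
    A runoff coefficient Q/P > 1 means more water leaves than arrives (e.g. snow
    undercatch in the precipitation data); losses P - Q larger than PET exceed the
    energy limit on evaporation.
    """
    out = df.copy()
    out["runoff_coeff"] = out["discharge"] / out["precip"]
    out["too_high_runoff"] = out["runoff_coeff"] > 1.0
    out["loss_exceeds_pet"] = (out["precip"] - out["discharge"]) > out["pot_evap"]
    out["disinformative"] = out["too_high_runoff"] | out["loss_exceeds_pet"]
    return out

# Example usage with made-up numbers:
basins = pd.DataFrame({"precip": [800.0, 400.0], "discharge": [300.0, 500.0],
                       "pot_evap": [600.0, 900.0]})
print(flag_inconsistent_basins(basins))
```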

  19. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    Science.gov (United States)

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-04-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing.

  20. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
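
    As a rough sketch of the optimization flavour involved (a generic Barzilai-Borwein / spectral gradient iteration on the ordinary least-squares function, not the non-quadratic convex function proposed in the paper):

```python
import numpy as np

def spectral_gradient_solve(A, b, x0=None, tol=1e-8, max_iter=500):
    """Solve A x ≈ b by minimizing f(x) = 0.5 * ||A x - b||^2 with a spectral
    (Barzilai-Borwein) step length, the step rule underlying SPG-type methods."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    g = A.T @ (A @ x - b)            # gradient of f
    alpha = 1.0                      # initial step length
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        alpha = float(s @ s) / float(s @ y)   # BB1 spectral step
        x, g = x_new, g_new
    return x

# Example: a small random consistent least-squares system.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10)); x_true = rng.normal(size=10)
x_hat = spectral_gradient_solve(A, A @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-5))
```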

  1. Scaling as an Organizational Method

    DEFF Research Database (Denmark)

    Papazu, Irina Maria Clara Hansen; Nelund, Mette

    2018-01-01

    Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role...... turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice....

  2. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
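
    For a feel of what a Newton-Krylov solve looks like in practice, here is a toy sketch using SciPy's matrix-free implementation (unrelated to the SciDAC applications discussed above; the boundary-value problem is made up for illustration):

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Discrete residual of -u'' + u^3 = 1 on (0, 1) with u(0) = u(1) = 0."""
    n = len(u)
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))                 # Dirichlet boundary values
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2   # second difference
    return -lap + u**3 - 1.0

# Jacobian-free Newton-Krylov: only residual evaluations are supplied; the
# Jacobian-vector products are approximated internally by finite differences.
u0 = np.zeros(99)
u = newton_krylov(residual, u0, f_tol=1e-10)
print(abs(residual(u)).max())
```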

  3. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems. The first relies on decomposition techniques; the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...
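
    A minimal sketch of the sampling-plus-reweighting idea mentioned above (generic importance reweighting under a changed input distribution; not the BONUS implementation itself, and all names are illustrative):

```python
import numpy as np
from scipy import stats

def reweighted_expectation(samples, f_values, base_dist, new_dist):
    """Re-estimate E_new[f(X)] from samples drawn under base_dist, without resampling.

    samples   : points x_i drawn from base_dist
    f_values  : precomputed model outputs f(x_i) (the expensive part, reused)
    base_dist, new_dist : frozen scipy.stats distributions exposing .pdf()
    """
    w = new_dist.pdf(samples) / base_dist.pdf(samples)   # importance weights
    return np.sum(w * f_values) / np.sum(w)              # self-normalized estimate

# Example: outputs sampled under N(0, 1), re-evaluated as if inputs were N(0.5, 0.8).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=20000)
fx = x**2 + np.sin(x)                                    # stand-in for an expensive model
print(reweighted_expectation(x, fx, stats.norm(0, 1), stats.norm(0.5, 0.8)))
```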

  4. Exploiting large-scale correlations to detect continuous gravitational waves.

    Science.gov (United States)

    Pletsch, Holger J; Allen, Bruce

    2009-10-30

    Fully coherent searches (over realistic ranges of parameter space and year-long observation times) for unknown sources of continuous gravitational waves are computationally prohibitive. Less expensive hierarchical searches divide the data into shorter segments which are analyzed coherently, then detection statistics from different segments are combined incoherently. The novel method presented here solves the long-standing problem of how best to do the incoherent combination. The optimal solution exploits large-scale parameter-space correlations in the coherent detection statistic. Application to simulated data shows dramatic sensitivity improvements compared with previously available (ad hoc) methods, increasing the spatial volume probed by more than 2 orders of magnitude at lower computational cost.

  5. Large scale computing in theoretical physics: Example QCD

    International Nuclear Information System (INIS)

    Schilling, K.

    1986-01-01

    The limitations of the classical mathematical analysis of Newton and Leibniz appear to be more and more overcome by the power of modern computers. Large scale computing techniques - which resemble closely the methods used in simulations within statistical mechanics - allow to treat nonlinear systems with many degrees of freedom such as field theories in nonperturbative situations, where analytical methods do fail. The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both big computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean world. The talk will attempt to describe both the computer aspects and the present state of the art of spectrum calculations within lattice QCD. (orig.)

  6. Large scale oil lease automation and electronic custody transfer

    International Nuclear Information System (INIS)

    Price, C.R.; Elmer, D.C.

    1995-01-01

    Typically, oil field production operations have only been automated at fields with long term production profiles and enhanced recovery. The automation generally consists of monitoring and control at the wellhead and centralized facilities. However, Union Pacific Resources Co. (UPRC) has successfully implemented a large scale automation program for rapid-decline primary recovery Austin Chalk wells where purchasers buy and transport oil from each individual wellsite. This project has resulted in two significant benefits. First, operators are using the system to re-engineer their work processes. Second, an inter-company team created a new electronic custody transfer method. This paper will describe: the progression of the company's automation objectives in the area; the field operator's interaction with the system, and the related benefits; the research and development of the new electronic custody transfer method

  7. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, and numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed at large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  8. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  9. Temperature scaling method for Markov chains.

    Science.gov (United States)

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
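
    A rough sketch of the simplest form of temperature reweighting for a sampled chain (standard Boltzmann reweighting; the TeS method in the paper is more elaborate, so treat this only as orientation):

```python
import numpy as np

def reweight_to_temperature(energies, observables, T_sim, T_new, k_B=1.0):
    """Estimate <A> at T_new from configurations sampled at T_sim.

    energies    : potential energies E_i of the sampled configurations
    observables : values A_i of the observable on the same configurations
    Each configuration is reweighted by exp(-(1/(k*T_new) - 1/(k*T_sim)) * E_i);
    the estimate degrades when T_new is far from T_sim (poor energy overlap).
    """
    d_beta = 1.0 / (k_B * T_new) - 1.0 / (k_B * T_sim)
    log_w = -d_beta * np.asarray(energies, float)
    log_w -= log_w.max()                      # avoid overflow in the exponential
    w = np.exp(log_w)
    return np.sum(w * np.asarray(observables, float)) / np.sum(w)
```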

  10. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas, they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  11. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  12. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Science.gov (United States)

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient on large-scale nonsmooth problems; several test problems with up to 100,000 variables are solved.
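
    For orientation, the classical (smooth) HZ search direction can be sketched as below; the nonsmooth MHZ modification studied in the paper is not reproduced here, and the backtracking line search and test problem are illustrative assumptions only:

        import numpy as np

        def hz_cg(f, grad, x, iters=500, tol=1e-8):
            """Minimize a smooth f with the Hager-Zhang conjugate gradient direction."""
            g = grad(x)
            d = -g
            for _ in range(iters):
                # Simple Armijo backtracking (the original HZ method uses a Wolfe-type search).
                alpha, gd = 1.0, np.dot(g, d)
                while f(x + alpha * d) > f(x) + 1e-4 * alpha * gd and alpha > 1e-12:
                    alpha *= 0.5
                x_new = x + alpha * d
                g_new = grad(x_new)
                if np.linalg.norm(g_new) < tol:
                    return x_new
                y = g_new - g
                dy = np.dot(d, y)
                beta = np.dot(y - 2.0 * d * np.dot(y, y) / dy, g_new) / dy   # HZ beta
                x, g, d = x_new, g_new, -g_new + beta * d
            return x

        # Convex quadratic test problem: f(x) = 0.5 x'Ax - b'x
        A = np.diag(np.arange(1.0, 11.0)); b = np.ones(10)
        x_star = hz_cg(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(10))
        print(np.linalg.norm(A @ x_star - b))        # residual norm of the computed solution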

  13. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient on large-scale nonsmooth problems; several test problems with up to 100,000 variables are solved.

  14. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  15. Metoder for Modellering, Simulering og Regulering af Større Termiske Processer anvendt i Sukkerproduktion. Methods for Modelling, Simulation and Control of Large Scale Thermal Systems Applied in Sugar Production

    DEFF Research Database (Denmark)

    Nielsen, Kirsten Mølgaard; Nielsen, Jens Frederik Dalsgaard

    The subject of this Ph.D. thesis is to investigate and develop methods for modelling, simulation and control applicable in large-scale thermal industrial plants. An ambition has been to evaluate the results in a physical process. Sugar production is well suited for the purpose. In collaboration...... simulator has been developed. The simulator handles the normal working conditions relevant to control engineers. A non-linear dynamic model based on mass and energy balances has been developed. The model parameters have been adjusted to data measured on a Danish sugar plant. The simulator consists...... of a computer, a data terminal and an electric interface corresponding to the interface at the sugar plant. The simulator operates in real time and thus a realistic test of controllers is possible. The idiomatic control methodology has been investigated developing a control concept for the evaporation...

  16. Phonetic search methods for large speech databases

    CERN Document Server

    Moyal, Ami; Tetariy, Ella; Gishri, Michal

    2013-01-01

    “Phonetic Search Methods for Large Databases” focuses on Keyword Spotting (KWS) within large speech databases. The brief will begin by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It will then continue by highlighting the various market segments in need of KWS solutions, as well as the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus will be on the Phonetic Search method and its efficient implementation. This will include a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors’ own research, which entails a comparative analysis of the Phonetic Search method, including algorithmic details. This brief is useful for resea...

  17. On the measurements of large scale solar velocity fields

    International Nuclear Information System (INIS)

    Andersen, B.N.

    1985-01-01

    A general mathematical formulation for the correction of the scattered light influence on solar Doppler shift measurements has been developed. This method has been applied to the straylight correction of measurements of solar rotation, limb effect, large scale flows and oscillations. It is shown that neglecting the straylight errors may cause spurious large scale velocity fields, oscillations and erroneous values for the solar rotation and limb effect. The influence of active regions on full disc velocity measurements has been studied. It is shown that a 13 day periodicity in the global velocity signal will be introduced by the passage of sunspots over the solar disc. With different types of low resolution apertures, other periodicities may be introduced. Accurate measurements of the center-to-limb velocity shift are presented for a set of magnetically insensitive lines well suited for solar velocity measurements. The absolute wavelength shifts are briefly discussed. The stronger lines have a 'supergravitational' shift of 300-400 m/s at the solar limb. The results may be explained by the presence of a 20-25 m/s poleward meridional flow and a latitudinal dependence of the granular parameters. Using a simple model it is shown that the main properties of the observations are explained by a 5% increase in the granular size with latitude. Data presented indicate that the resonance line K I, 769.9 nm has a small but significant limb effect of 125 m/s from center to limb

  18. On soft limits of large-scale structure correlation functions

    International Nuclear Information System (INIS)

    Sagunski, Laura

    2016-08-01

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the
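
    For orientation, the unequal-time consistency conditions referred to here are usually quoted in the literature in a form along the following lines (conventions and normalizations vary between references, so this is a hedged sketch rather than the thesis's exact statement):

        \lim_{q \to 0} \langle \delta_{\mathbf{q}}(\tau)\, \delta_{\mathbf{k}_1}(\tau_1) \cdots \delta_{\mathbf{k}_N}(\tau_N) \rangle'
            \simeq - P_\delta(q,\tau) \sum_{i=1}^{N} \frac{D(\tau_i)}{D(\tau)}\, \frac{\mathbf{k}_i \cdot \mathbf{q}}{q^2}\,
            \langle \delta_{\mathbf{k}_1}(\tau_1) \cdots \delta_{\mathbf{k}_N}(\tau_N) \rangle' ,

    where the prime denotes correlators with the overall momentum-conserving delta function removed, P_\delta is the power spectrum of the soft mode, and D is the linear growth factor. At equal times (\tau_i = \tau) momentum conservation makes the sum collapse and the 1/q enhancement disappears, which is why the equal-time conditions probe the interplay between soft and hard modes rather than a pure symmetry statement.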

  19. On soft limits of large-scale structure correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sagunski, Laura

    2016-08-15

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the

  20. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    Science.gov (United States)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to N_max = 22 and to reveal the ^4He + d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
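
    IR extrapolations of the kind described here are commonly performed with an exponential ansatz of the following form (a standard parameterization, not necessarily the exact corrections used in this paper):

        E(L) \simeq E_\infty + A\, e^{-2 k_\infty L},

    where L is the effective infrared length of the truncated oscillator basis, E_\infty and A are fit parameters, and k_\infty is the momentum scale governing the exponential convergence. The abstract's statement that this scale is set by the threshold energy of the first open decay channel corresponds, roughly, to k_\infty \sim \sqrt{2\mu E_{\mathrm{thr}}}/\hbar for reduced mass \mu (a hedged reading, not a formula quoted from the paper).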

  1. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is to operate in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of such challenges, which affect electric grid reliability and economic operations. These challenges are: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Units (PMUs) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, expansion of transmission line capacity must be evaluated with methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, the congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "where" to add, "how much transmission line capacity" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve the transmission system congestion, create
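
    As a toy illustration of why congestion raises cost (invented numbers, not part of the dissertation), a two-bus economic dispatch with a transmission limit can be written as a small linear program:

        from scipy.optimize import linprog

        cost = [20.0, 50.0]          # $/MWh for the cheap (bus 1) and expensive (bus 2) unit
        load = 150.0                 # MW demand at bus 2
        line_limit = 100.0           # MW capacity of the line from bus 1 to bus 2

        # Variables: [g1, g2]. Flow on the line equals g1 (all of unit 1's output is exported).
        res = linprog(
            c=cost,
            A_ub=[[1.0, 0.0]], b_ub=[line_limit],      # congestion constraint: g1 <= line_limit
            A_eq=[[1.0, 1.0]], b_eq=[load],            # power balance
            bounds=[(0, 200.0), (0, 200.0)],
        )
        print(res.x)   # expected: [100., 50.]

    Without the 100 MW line limit the cheap unit would serve the whole 150 MW load; the binding limit forces 50 MW of expensive generation, which is exactly the kind of congestion cost that motivates transmission expansion studies.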

  2. Solution approach for a large scale personnel transport system for a large company in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-07-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different-sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, the results remaining very close between both. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, then the heuristic algorithm provided a feasible solution. Supported by the validation with smaller scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP made it possible to obtain an improved routing plan specific to the requirements of the region.
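
    The paper's own heuristic is not reproduced in this record; as a generic illustration of the heuristic/exact trade-off it discusses, a minimal nearest-neighbour construction heuristic for a capacitated routing problem might look like the following (all coordinates, demands and the capacity are invented):

        import math

        def nearest_neighbour_routes(depot, stops, capacity):
            """Greedily build routes from the depot until vehicle capacity is used up."""
            unvisited = dict(stops)                       # name -> (x, y, demand)
            routes = []
            while unvisited:
                route, load, pos = [], 0, depot
                while True:
                    candidates = [(math.dist(pos, (x, y)), name)
                                  for name, (x, y, d) in unvisited.items() if load + d <= capacity]
                    if not candidates:
                        break
                    _, name = min(candidates)
                    x, y, d = unvisited.pop(name)
                    route.append(name); load += d; pos = (x, y)
                if not route:      # a stop's demand exceeds capacity; avoid an infinite loop
                    raise ValueError("stop demand exceeds vehicle capacity")
                routes.append(route)
            return routes

        stops = {"A": (2, 1, 3), "B": (5, 4, 4), "C": (1, 6, 2), "D": (7, 2, 5)}
        print(nearest_neighbour_routes(depot=(0, 0), stops=stops, capacity=8))

    Heuristics like this give feasible routes quickly on full-scale instances; comparing them against an exact solver on smaller instances, as the paper does, is the usual way to bound how far the heuristic solution may be from optimal.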

  3. Solution approach for a large scale personnel transport system for a large company in Latin America

    International Nuclear Information System (INIS)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-01-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different-sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, the results remaining very close between both. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, then the heuristic algorithm provided a feasible solution. Supported by the validation with smaller scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP made it possible to obtain an improved routing plan specific to the requirements of the region.

  4. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Full Text Available Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different-sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, the results remaining very close between both. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, then the heuristic algorithm provided a feasible solution. Supported by the validation with smaller scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP made it possible to obtain an improved routing plan specific to the requirements of the region.

  5. Large-scale land transformations in Indonesia: The role of ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... enable timely responses to the impacts of large-scale land transformations in Central Kalimantan ...

  6. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied

  7. Resolute large scale mining company contribution to health services of

    African Journals Online (AJOL)

    Resolute large scale mining company contribution to health services of Lusu ... in terms of socio economic, health, education, employment, safe drinking water, ... The data were analyzed using the Statistical Package for the Social Sciences (SPSS).

  8. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...... sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land...... commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role...

  9. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  10. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    fuel/energy, climate, and finance has occurred and one of the most ... this wave of large-scale land acquisitions. In fact, esti- ... Environmental Rights Action/Friends of the Earth, Nigeria ... map the differentiated impacts (gender, ethnicity, ...).

  11. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  12. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Security can help increase accountability for large-scale land acquisitions in ... to build decent economic livelihoods and participate meaningfully in decisions ...

  13. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
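
    The framework in this record targets spectral dimensionality reduction methods running on thousands of cores; as a much smaller, serial sketch of the basic operation it parallelizes (projecting high-dimensional samples onto a few directions), a PCA via the SVD can be written as follows (illustrative only, not the authors' framework):

        import numpy as np

        def pca_embed(X, n_components=2):
            """Project rows of X (samples x features) onto the top principal components."""
            Xc = X - X.mean(axis=0)
            U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Xc @ Vt[:n_components].T

        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 50))          # synthetic high-dimensional data
        print(pca_embed(X).shape)                # -> (1000, 2)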

  14. No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation

    OpenAIRE

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2010-01-01

    In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the clas...

  15. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
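
    The record does not spell out the hierarchical model; a minimal conjugate-Bayes sketch of the same idea, i.e. shrinking a noisy sample covariance toward a prior to curb overfitting, is shown below (zero-mean data and an inverse-Wishart prior are assumptions of the sketch, not the paper's model):

        import numpy as np

        def posterior_mean_cov(X, Psi, nu):
            """Posterior mean of the covariance under an inverse-Wishart(Psi, nu) prior,
            assuming zero-mean Gaussian rows in X."""
            n, p = X.shape
            S = X.T @ X                              # scatter matrix of the data
            return (Psi + S) / (nu + n - p - 1)

        rng = np.random.default_rng(2)
        p, n = 30, 20                                # more variables than samples
        X = rng.normal(size=(n, p))
        Sigma_hat = posterior_mean_cov(X, Psi=np.eye(p), nu=p + 2)
        print(np.linalg.cond(Sigma_hat))             # finite, unlike the singular sample covariance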

  16. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    pump flow meters, sediment trap weigh tanks, and beach profiling lidar. A detailed discussion of the original LSTF features and capabilities can be... The note (ERDC/CHL CHETN-I-88, April 2016) describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to the measurement systems. The purpose of these upgrades was to increase

  17. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years, the number, complexity, and size of large-scale networks have increased. The best example of a large-scale network is the Internet, and more recent ones are data centers in cloud environments. In this context, management tasks such as traffic monitoring, security, and performance optimization are a major burden for network administrators. This research studies different protocols, i.e., conventional protocols like the Simple Network Management Protocol and newer Gossip-bas...

  18. Large Scale Metric Learning for Distance-Based Image Classification on Open Ended Data Sets

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.; Farinella, G.M.; Battiato, S.; Cipolla, R,

    2013-01-01

    Many real-life large-scale datasets are open-ended and dynamic: new images are continuously added to existing classes, new classes appear over time, and the semantics of existing classes might evolve too. Therefore, we study large-scale image classification methods that can incorporate new classes

  19. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  20. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....

  1. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  2. Large scale particle image velocimetry with helium filled soap bubbles

    Energy Technology Data Exchange (ETDEWEB)

    Bosbach, Johannes; Kuehn, Matthias; Wagner, Claus [German Aerospace Center (DLR), Institute of Aerodynamics and Flow Technology, Goettingen (Germany)

    2009-03-15

    The application of particle image velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium-filled soap bubbles as tracer particles with high-power quality-switched solid-state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full-scale double-aisle aircraft cabin mock-up for validation of computational fluid dynamics simulations. (orig.)
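
    The core measurement step behind any PIV system, large-scale or not, is locating the cross-correlation peak between two interrogation windows; a minimal FFT-based sketch (not DLR's processing chain, and with synthetic images) is:

        import numpy as np

        def window_displacement(win_a, win_b):
            """Return the (dy, dx) shift that best aligns win_b with win_a."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap shifts larger than half the window into negative displacements.
            shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
            return tuple(shift)

        rng = np.random.default_rng(3)
        frame1 = rng.random((64, 64))
        frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))   # known displacement
        print(window_displacement(frame2, frame1))             # -> (3, -5)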

  3. Large scale particle image velocimetry with helium filled soap bubbles

    Science.gov (United States)

    Bosbach, Johannes; Kühn, Matthias; Wagner, Claus

    2009-03-01

    The application of Particle Image Velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium-filled soap bubbles as tracer particles with high-power quality-switched solid-state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full-scale double-aisle aircraft cabin mock-up for validation of Computational Fluid Dynamics simulations.

  4. Challenges in Managing Trustworthy Large-scale Digital Science

    Science.gov (United States)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs including coupled models and ensembles, data products that have been processed to a level of usability, and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, and they far exceed the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to ensure the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" in order to productively produce new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, system software stacks and libraries, and the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of the approach and robustness of the methods over full reproducibility. Furthermore, with large-volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on access to the updated products and on confidence that previous outcomes remain relevant and can be updated with the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  5. Complex modular structure of large-scale brain networks

    Science.gov (United States)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.
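
    The paper's random-walk-based module detection is not reproduced here; as a stand-in that shows what a modular decomposition looks like, the greedy modularity optimization shipped with networkx can be run on a small synthetic graph with two planted modules:

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Two planted modules (5-cliques) joined by a single edge; synthetic, illustrative only.
        G = nx.relabel_nodes(nx.complete_graph(5), lambda n: f"a{n}")
        G.update(nx.relabel_nodes(nx.complete_graph(5), lambda n: f"b{n}"))
        G.add_edge("a0", "b0")

        modules = greedy_modularity_communities(G)
        print([sorted(m) for m in modules])     # -> two modules of five nodes each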

  6. Large-scale experience with biological treatment of contaminated soil

    International Nuclear Information System (INIS)

    Schulz-Berendt, V.; Poetzsch, E.

    1995-01-01

    The efficiency of biological methods for the cleanup of soil contaminated with total petroleum hydrocarbons (TPH) and polycyclic aromatic hydrocarbons (PAH) was demonstrated by a large-scale example in which 38,000 tons of TPH- and PAH-polluted soil was treated onsite with the TERRAFERM® degradation system to reach the target values of 300 mg/kg TPH and 5 mg/kg PAH. Detection of the ecotoxicological potential (Microtox® assay) showed a significant decrease during the remediation. Low concentrations of PAH in the ground were treated by an in situ technology. The in situ treatment was combined with mechanical measures (slurry wall) to prevent the contamination from dispersing from the site

  7. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    Science.gov (United States)

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost effectively evolve such applications over a long lifetime.

  8. Distributed system for large-scale remote research

    International Nuclear Information System (INIS)

    Ueshima, Yutaka

    2002-01-01

    In advanced photon research, large-scale simulations and high-resolution observations are powerful tools. In numerical and real experiments, the real-time visualization and steering system is considered a promising method of data analysis. This approach is suitable for one-off analyses or low-cost experiments and simulations. In research on an unknown problem, the output data must be analyzed many times, because a conclusive analysis is difficult to achieve in a single pass. Consequently, output data should be filed so that they can be consulted and analyzed at any time. To support research, we need automatic functions for transporting data files from the data generator to data storage, analyzing data, tracking the history of data handling, and so on. The supporting system will be a functionally distributed system. (author)

  9. Automatic Installation and Configuration for Large Scale Farms

    CERN Document Server

    Novák, J

    2005-01-01

    Since the early appearance of commodity hardware, the utilization of computers rose rapidly, and they became essential in all areas of life. Soon it was realized that nodes are able to work cooperatively in order to solve new, more complex tasks. This concept materialized in coherent aggregations of computers called farms and clusters. Collective application of nodes, being efficient and economical, was adopted in education, research and industry before long. But maintenance, especially at large scale, appeared as a problem to be resolved. New challenges needed new methods and tools. Development work has been started to build farm management applications and frameworks. In the first part of the thesis, these systems are introduced. After a general description of the matter, a comparative analysis of different approaches and tools illustrates the practical aspects of the theoretical discussion. CERN, the European Organization for Nuclear Research, is the largest Particle Physics laboratory in the world....

  10. Hierarchical optimal control of large-scale nonlinear chemical processes.

    Science.gov (United States)

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear chemical stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach, in finding the optimal solution, over the centralized method.

  11. Cosmological Parameter Estimation with Large Scale Structure Observations

    CERN Document Server

    Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins which would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, $C_0(z_1,z_2)$.
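
    Forecasts of this kind rest on a Fisher matrix built from the observed spectra; a standard form for angular power spectra, quoted here only as a generic sketch whose conventions may differ from the paper's, is

        F_{\alpha\beta} = f_{\mathrm{sky}} \sum_{\ell} \frac{2\ell+1}{2}\,
            \mathrm{Tr}\!\left[ \mathbf{C}_\ell^{-1} \frac{\partial \mathbf{C}_\ell}{\partial \theta_\alpha}\,
            \mathbf{C}_\ell^{-1} \frac{\partial \mathbf{C}_\ell}{\partial \theta_\beta} \right],

    where \mathbf{C}_\ell collects the number-count spectra C_\ell(z_i, z_j) across redshift-bin pairs (including shot noise), \theta_\alpha are the cosmological parameters, and f_{\mathrm{sky}} is the observed sky fraction; forecast marginalized errors then follow from \sigma(\theta_\alpha) = \sqrt{(F^{-1})_{\alpha\alpha}}.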

  12. Complex Formation Control of Large-Scale Intelligent Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Lei

    2012-01-01

    Full Text Available A new formation framework of large-scale intelligent autonomous vehicles is developed, which can realize complex formations while reducing data exchange. Using the proposed hierarchical formation method and the automatic dividing algorithm, vehicles are automatically divided into leaders and followers by exchanging information via a wireless network at the initial time. Then, the leaders form the formation's geometric shape using global formation information, and the followers track their own virtual leaders to form line formations using local information. The formation control laws of the leaders and followers are designed based on consensus algorithms. Moreover, collision-avoidance problems are considered and solved using artificial potential functions. Finally, a simulation example consisting of 25 vehicles shows the effectiveness of the theory.
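
    As a generic illustration of consensus-based formation control (not the paper's hierarchical leader/follower design; gains, offsets and the communication graph below are invented), each vehicle can be driven toward the average of its neighbours' positions shifted by a desired formation offset:

        import numpy as np

        def consensus_step(positions, offsets, adjacency, gain=0.2):
            """One discrete-time consensus update toward the desired formation."""
            n = len(positions)
            new = positions.copy()
            for i in range(n):
                for j in range(n):
                    if adjacency[i, j]:
                        # drive the relative position (p_i - p_j) toward (offset_i - offset_j)
                        new[i] += gain * ((positions[j] - positions[i]) - (offsets[j] - offsets[i]))
            return new

        offsets = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # a unit square
        adjacency = np.ones((4, 4)) - np.eye(4)                                # all-to-all links
        positions = np.random.default_rng(4).random((4, 2)) * 5

        for _ in range(100):
            positions = consensus_step(positions, offsets, adjacency)
        print(positions - positions[0])   # relative positions converge to offsets - offsets[0]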

  13. Large scale indenter test program to measure sub gouge displacements

    Energy Technology Data Exchange (ETDEWEB)

    Been, Ken; Lopez, Juan [Golder Associates Inc, Houston, TX (United States); Sancio, Rodolfo [MMI Engineering Inc., Houston, TX (United States)

    2011-07-01

    The production of submarine pipelines in an offshore environment covered with ice is very challenging. Several precautions must be taken such as burying the pipelines to protect them from ice movement caused by gouging. The estimation of the subgouge displacements is a key factor in pipeline design for ice gouged environments. This paper investigated a method to measure subgouge displacements. An experimental program was implemented in an open field to produce large scale idealized gouges on engineered soil beds (sand and clay). The horizontal force required to produce the gouge, the subgouge displacements in the soil and the strain imposed by these displacements were monitored on a buried model pipeline. The results showed that for a given keel, the gouge depth was inversely proportional to undrained shear strength in clay. The subgouge displacements measured did not show a relationship with the gouge depth, width or soil density in sand and clay tests.

  14. Large-scale Ising-machines composed of magnetic neurons

    Science.gov (United States)

    Mizushima, Koichi; Goto, Hayato; Sato, Rie

    2017-10-01

    We propose Ising-machines composed of magnetic neurons, that is, magnetic bits in a recording track. In large-scale machines, the sizes of both neurons and synapses need to be reduced, and neat and smart connections among neurons are also required to achieve all-to-all connectivity among them. These requirements can be fulfilled by adopting magnetic recording technologies such as race-track memories and skyrmion tracks because the area of a magnetic bit is almost two orders of magnitude smaller than that of static random access memory, which has normally been used as a semiconductor neuron, and the smart connections among neurons are realized by using the read and write methods of these technologies.
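
    The proposal above is about hardware, but the optimization problem an Ising machine solves can be illustrated in software: find spin states s_i = ±1 with low energy H = -(1/2) Σ_ij J_ij s_i s_j. A small simulated-annealing sketch (couplings and cooling schedule invented for the example) is:

        import numpy as np

        def anneal(J, steps=20_000, T0=2.0, T1=0.01, seed=0):
            """Search for a low-energy spin configuration of the Ising couplings J."""
            rng = np.random.default_rng(seed)
            n = J.shape[0]
            s = rng.choice([-1, 1], size=n)
            for t in range(steps):
                T = T0 * (T1 / T0) ** (t / steps)          # geometric cooling schedule
                i = rng.integers(n)
                dE = 2.0 * s[i] * (J[i] @ s)               # energy change of flipping spin i (J[i, i] = 0)
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[i] = -s[i]
            return s

        rng = np.random.default_rng(1)
        J = rng.normal(size=(16, 16)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
        s = anneal(J)
        print(-0.5 * s @ J @ s)                            # final Ising energy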

  15. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large-area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  16. Large-scale functional purification of recombinant HIV-1 capsid.

    Directory of Open Access Journals (Sweden)

    Magdeleine Hung

    Full Text Available During human immunodeficiency virus type-1 (HIV-1 virion maturation, capsid proteins undergo a major rearrangement to form a conical core that protects the viral nucleoprotein complexes. Mutations in the capsid sequence that alter the stability of the capsid core are deleterious to viral infectivity and replication. Recently, capsid assembly has become an attractive target for the development of a new generation of anti-retroviral agents. Drug screening efforts and subsequent structural and mechanistic studies require gram quantities of active, homogeneous and pure protein. Conventional means of laboratory purification of Escherichia coli expressed recombinant capsid protein rely on column chromatography steps that are not amenable to large-scale production. Here we present a function-based purification of wild-type and quadruple mutant capsid proteins, which relies on the inherent propensity of capsid protein to polymerize and depolymerize. This method does not require the packing of sizable chromatography columns and can generate double-digit gram quantities of functionally and biochemically well-behaved proteins with greater than 98% purity. We have used the purified capsid protein to characterize two known assembly inhibitors in our in-house developed polymerization assay and to measure their binding affinities. Our capsid purification procedure provides a robust method for purifying large quantities of a key protein in the HIV-1 life cycle, facilitating identification of the next generation anti-HIV agents.

  17. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating properties of wind energy, when a large-scale wind farm is connected to the grid, it has a considerable impact on the power system, which is different from traditional power plants. Therefore it is necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid as well as the transient characteristics of the wind turbines when the grid is at fault. However, we must first establish an effective wind turbine generator (WTG) model. As the doubly-fed VSCF wind turbine has currently become the mainstream wind turbine type, this article first reviews the research progress of the doubly-fed VSCF wind turbine and then describes the detailed building process of the model. After that, it investigates common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  18. Experimental study on dynamic behavior of large scale foundation, 1

    International Nuclear Information System (INIS)

    Hanada, Kazufumi; Sawada, Yoshihiro; Esashi, Yasuyuki; Ueshima, Teruyuki; Nakamura, Hideharu

    1983-01-01

    The large-sized, high performance vibrating table in the Nuclear Power Engineering Test Center is installed on a large-scale concrete foundation of length 90.9 m, width 44.8 m and maximum thickness 21 m, weighing 150,000 tons. Through the experimental study on the behavior of the foundation, which is set on gravel ground, useful information should be obtained on the siting of a nuclear power plant on the Quaternary stratum ground. The objective of research is to grasp the vibration characteristics of the foundation during the vibration of the table to evaluate the interaction between the foundation and the ground, and to evaluate an analytical method for numerically simulating the vibration behavior. In the present study, the vibration behavior of the foundation was clarified by measurement, and in order to predict the vibration behavior, the semi-infinite theory of elasticity was applied. The accuracy of this analytical method was demonstrated by comparison with the measured results. (Mori, K.)

  19. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  20. Response matrix method for large LMFBR analysis

    International Nuclear Information System (INIS)

    King, M.J.

    1977-06-01

    The feasibility of using response matrix techniques for computational models of large LMFBRs is examined. Since finite-difference methods based on diffusion theory have generally found a place in fast-reactor codes, a brief review of their general matrix foundation is given first in order to contrast it to the general strategy of response matrix methods. Then, in order to present the general method of response matrix technique, two illustrative examples are given. Matrix algorithms arising in the application to large LMFBRs are discussed, and the potential of the response matrix method is explored for a variety of computational problems. Principal properties of the matrices involved are derived with a view to application of numerical methods of solution. The Jacobi iterative method as applied to the current-balance eigenvalue problem is discussed
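
    Since the report closes with the Jacobi iterative method, a generic sketch of Jacobi iteration for a linear system may help fix ideas (this is the textbook scheme, not the current-balance eigenvalue problem treated in the report, and the test matrix is invented):

        import numpy as np

        def jacobi(A, b, iters=200, tol=1e-10):
            """Solve A x = b by iterating x <- D^{-1}(b - R x), with A = D + R."""
            D = np.diag(A)                    # diagonal entries of A
            R = A - np.diag(D)                # off-diagonal remainder
            x = np.zeros_like(b)
            for _ in range(iters):
                x_new = (b - R @ x) / D
                if np.linalg.norm(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        # Diagonally dominant test system, for which Jacobi is guaranteed to converge.
        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(jacobi(A, b), np.linalg.solve(A, b))   # the two solutions should agree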